Apple's M1 Processor: The hardware behind the hype

  • Published 9 Sep 2024

COMMENTS • 206

  • @CodingCoach
    @CodingCoach  3 years ago +22

    To my class (and anyone else who wants to participate in a good discussion): What do you think? Is this a move in the right direction? If so, will the PC market be forced to follow suit, and in what ways do you see them accomplishing such a shift? Will other platforms that are ARM-based (like Android or Chrome OS, possibly?) start growing and being used more heavily for desktop-like use, becoming the new competition? Or do you think Intel / AMD and the current desktop CISC chips are here to stay?

    • @roxannelai1514
      @roxannelai1514 3 years ago +3

      Hi Professor, here are my answers to your questions:
      What do you think?
      Answer = I think that moving desktops from CISC to RISC is a smart move because, as you said, fixed-length instructions make the decode process much simpler (and therefore faster), which makes for better pipelines. A lot of people complain about the battery life of Apple products, so by moving to RISC, Apple’s products will have better power efficiency, battery life, and performance. As a user, these are all very important things, especially when it comes to writing and running programs. I have an Apple laptop (a very heavy and old one), and I am always annoyed when I try to run programs on it or use it for extended periods of time, and it freezes up or gives me the “spinning pinwheel of doom.” The M1 processor could fix some of these problems, and make the user experience much better.
      Is this a move in the right direction?
      Answer = I think this is a move in the right direction because of all of the demands that users have of Apple products, and our technology in general. Apple is known for its “pretty and easy to use” interface(s), but processing speeds can always be improved upon. In the CS field, a lot of people look down on those who use Apple computers for coding and such, and I think this is because the processing speeds of Apple laptops are not as fast as say a Dell or Linux machine (I could be wrong, but this is just from my personal experience). So I would say that Apple’s shift to RISC, and trying to make the processing faster is a move in the right direction.
      If so, will the PC market be forced to follow suit and in what ways do you see them accomplishing such a shift?
      Answer = I think that the PC market will probably follow suit, because everyone wants to be the company with “the fastest computers,” and if Apple has the hypothetical potential to do 8X the performance with their “ultra-wide execution architecture,” I think the competition will try to follow suit. Like what happened with the IBM clones, I think companies will try to reverse-engineer the M1 chip, and see if they can find a way to mimic what Apple has done.
      Will other platforms that are ARM-based (like Android or Chrome OS, possibly?) start growing and being used more heavily for desktop-like use, becoming the new competition?
      Answer = I can see how other ARM based things like Android and Chrome could become new competition, but I think that since Apple has a head-start in popularizing the shifting of desktops from CISC to RISC, I think it will be very hard to overtake them.
      Or do you think Intel / AMD and the current desktop CISC chips are here to stay?
      Answer = I personally don’t think Intel / AMD and CISC chips are the most efficient for desktops, but it might take a while for other companies to shift to RISC. I think right now, Apple has a head start, and it might take a while for everyone else to catch up. I also think that other companies might wait and see how Apple’s new M1 goes over with users, and if it proves to be the best, then other companies will start to follow suit. They want to see if it will work out before following Apple off of a potential cliff.
      I hope this is how you wanted us to answer! Thanks, ~ Roxanne Lai

    • @mackeypaints
      @mackeypaints 3 years ago +1

      I believe the switch from a CISC architecture to a RISC architecture would be beneficial for Apple, and I think this will do a great job in keeping them up to speed with the competition. Apple laptops (less so now, but still a bit), can be seen as a bit too expensive for the power that you are getting, so assuming the price doesn't skyrocket this could be a great way to add a bit more value to the purchase. The extended battery life is always a nice benefit, and the faster performance speed will help the new Apple products stack up against competing products at a similar price point. This seems like a move in the right direction for Apple, and may help them look better in the public eye. In my opinion, Apple makes great products that are just too expensive to justify what you're spending to get them. With not that much work, I could find a PC at a much cheaper price point that has similar power. I think this will help Apple move out of being the easy option from a computer standpoint. Now, when talking about whether PC will follow suit, I think it depends on just how much of an improvement this chip ends up being. If this chip ends up being a huge success for Apple, and puts them on top, then I could see PC following suit. However, PCs and Apple computers already fill different consumer niches, so unless those niches change like I just mentioned I don't see PC following suit. I don't see other ARM based architectures being a huge competitor. Apple gets so much business off of name brand recognition alone, which gives them an edge over other ARM based architectures, especially over Android. I still think the current CISC chips will be here to stay for a while, but they may not be as prevalent as they once were. Apple has a lot of power over the technology world, so them moving to a RISC architecture could be a reason for companies to switch to follow the trends. However, some companies may stay with CISC because it's what they know and it works for them. I think as long as CISC still has some benefits over RISC, companies will continue to use it.

    • @CodingCoach
      @CodingCoach  3 years ago +3

      @@roxannelai1514 Hi Roxanne, great observations! Thank you for sharing them! I think your observations on current ARM manufacturers entering the desktop space are something to watch for!

    • @CodingCoach
      @CodingCoach  3 years ago +4

      @@mackeypaints I like your point about PC and Mac filling different niches... it is often too easy to group things and assume outcomes in a hasty manner. Thank you!

    • @ancientgearsynchro
      @ancientgearsynchro 3 years ago

      To be honest, this does seem like progress for progress's sake in my eyes. Yes, reduced does have more freedom, but more moving parts mean more space to crash and burn, as my history with coding has taught me. I also don't think Apple has enough pull to really move the market in its direction, so while I do think there will be more with RISC as their building blocks, I don't think it will be as revolutionary as the OG Macintosh computer. Also, do I think Chrome OS will become mainstream? Hard no. (Source: Me, who is stuck with a Chromebook laptop)
      Signed Matt.

  • @MrSamPhoenix
    @MrSamPhoenix 3 years ago +53

    With the exception of using “Gigabits” instead of “GigaBytes” as the measurement of the RAM, this video is flawless in explaining the A14/M1 architecture.

    • @CodingCoach
      @CodingCoach  3 years ago +11

      Thank you! I apologize again for the gigabit slips.

    • @JoshWalker1
      @JoshWalker1 3 years ago +4

      @@CodingCoach haha. It’s apparently a pretty ingrained bad habit - I just heard “megabit” instead of byte in the cache section too

    • @CodingCoach
      @CodingCoach  3 years ago +4

      Yeah, just subconscious, wasn't paying enough attention...

    • @MrSamPhoenix
      @MrSamPhoenix 3 years ago +5

      @@CodingCoach - keep up the great work

    • @tiqo8549
      @tiqo8549 3 years ago +4

      We know he's talking about things he knows a lot about. So... I knew he meant GB. I don't care...

  • @mandelbro777
    @mandelbro777 3 years ago +10

    Awesome explanation. 1000% better than any comparable videos and you did it in only half an hour !! Legendary

  • @dannyh807
    @dannyh807 3 years ago +2

    First year compSci student from the UK here, it is so nice hearing this explained in such a technical way and you did an amazing job of being clear and concise here and have earned my subscription. Thank you for this video.

  • @Alakazam2047
    @Alakazam2047 3 years ago +14

    Wish you had taught my ECE course when I was taking it. You keep the subject engaging with current news, and it isn't dull like my professors who just teach straight out of the book or without current applications.
    With the discussion between CISC vs RISC, the benefits of RISC are certainly noticeable, but could you also talk a bit about what will happen to CISC in the next few years as the popularity of RISC increases?

    • @CodingCoach
      @CodingCoach  3 years ago +1

      I wanted to try avoiding speculation in the video. I think (especially after reading through everyone's thoughts in the comments) that there are many variables, and while I do think there are some inherent differences that might favor RISC in the future, there are obviously huge organizational choices that Apple made (and has an easier time doing because it is their chip for their machines running their software). I do think that SOC is the future and that, as has happened before, the rest of the industry will start moving in this direction.
      That is the best part about the future, it keeps you guessing :)

  • @mrjean9376
    @mrjean9376 3 years ago +6

    Please make this kind of analysis video lecture more often. This is very, very great for improving everyone's knowledge of computer science! Very great channel

  • @stachowi
    @stachowi 3 years ago +12

    so the YT algo showed me your video and you did an amazing job... I'm an EE/CS out of university now for 20 years and this was very well done.
    I subscribed.

  • @bat_daddy6455
    @bat_daddy6455 3 years ago +3

    I think that the cache certainly plays a great role in the M1's amazing performance. It is evident how cache can actually make quite a difference in the latency and the speed of the chip

  • @suntzu1409
    @suntzu1409 3 years ago +7

    You are absolutely underrated

  • @mauriciobuendia2379
    @mauriciobuendia2379 1 year ago +1

    I stumbled upon this video in my recommended and I enjoyed it incredibly, I'm a 2nd year electrical engineering student in NL and this was beautiful to watch and motivated me so much.
    Definitely got my subscription and I will stay tuned for more content like this!

  • @teeI0ck
    @teeI0ck 3 years ago +6

    showing an accurate and deep understanding; great perspective. 💡
    Thank you very much for all the insightful information. 🤝

  • @Giigigi1122
    @Giigigi1122 3 years ago +1

    Not like other YouTubers who just show benchmarks and say it is killing it. This is way more interesting and informative. And, he was right about every aspect of the M1 chip!

  • @vernearase3044
    @vernearase3044 3 years ago +4

    You keep saying 16 gigabits, but it's up to 16 gigabytes of storage.
    M1 has 4 high performance Firestorm and 4 high efficiency Icestorm cores - it was designed for the low-end MacBook Air (fanless) and 13" MacBook Pro models as part of their annual spec bump.
    Rumor has it the M1x slated for 2021 will have 8-16 Firestorm cores (depending on binning) and will be targeted at machines like the 16" MacBook Pro and possibly the low end iMac (and maybe a high end Mac Mini).
    In 2008, Apple acquired PA Semi and worked with cash strapped Intrinsity and Samsung to produce a FastCore Cortex-A8; the frenemies famously split and Apple used their IP and Imagination's PowerVR to create the A4 and Samsung took their tech to produce the Exynos 3. Apple acquired Intrinsity and continued to hire engineering talent from IBM's Cell and XCPU design teams, and hired Johny Srouji from IBM who worked on the POWER7 line to direct the effort.
    This divergence from standard ARM designs was continued by Apple who continued to nurture and build their Silicon Design Team (capitalized out of respect) for a decade, ignoring standard ARM designs building their own architecture, improving and optimizing it year by year for the last decade.
    Whereas other ARM processor makers like Qualcomm and Samsung pretty much now use standard ARM designed cores - Apple has their own designs and architecture and has greatly expanded their own processor acumen to the point where the Firestorm cores in the A14 and M1 are the most sophisticated processors in the world with an eight wide processor design with a 690 instruction execution queue with a massive reorder buffer and the arithmetic units to back it up - which means its out-of-order execution unit can execute up to eight instructions _simultaneously._
    x86 processor makers are hampered by the CISC design and a variable instruction length. This means that at most they can produce a three wide design and even for that the decoder would have to be fiendishly clever, as it would have to guess where one instruction ended and the next began.
    There's a problem shared with x86/64 processor makers and Windows - they never met an instruction or feature they didn't like. What happens then is you get a build-up of crud that no one uses, but it still consumes energy and engineering time to keep working.
    AMD can get better single core speed by pushing up clocks (and dealing with the exponentially increased heat though chiplets are probably much harder to cool), and Intel by reducing the number of cores (the top of 10900K actually had to be shaved to achieve enough surface area to cool the chip so it at 14nm had reached the limits of physics). Both run so hot they are soon in danger of running into Moore's Wall.
    Apple OTOH ruthlessly pares underused or unoptimizable features.
    When Apple determined that ARMv7 (32 bit ARM) was unoptimizable, they wrote it out of iOS, and removed those logic blocks from their CPUs in _two years,_ repurposing the silicon real estate for more productive things. Intel, AMD, and yes even Qualcomm couldn't do that in a _decade._
    Apple continues that with _everything_ - not enough people using Force Touch - deprecate it, remove it from the hardware, and replace it with Haptic Touch. Gone.
    Here's another secret of efficiency - make it a goal. Last year on the A13 Bionic used in the iPhone 11s, the Apple Silicon Team introduced hundreds of voltage domains so they could turn off parts of the chip not in use. Following their annual cadence, they increased the speed of the Lightning high performance and the Thunder high efficiency cores by 20% despite no change in the 7nm mask size. As an aside, they increased the speed of matrix multiplication and division by six times (used in machine learning).
    This year they increased the speed of the Firestorm high performance and Icestorm high efficiency cores by another 20% while dropping the mask size from 7nm to 5nm. That's a hell of a compounding rate and explains how they got to where they are. Rumor has it they've bought all the 3nm capacity from TSMC for the A16 (and probably M2) in two years.
    Wintel fans would deny the efficacy of the A series processors and say they were mobile chips, as if they used slower silicon with wheels on the bottom or more sluggish electrons.
    What they were were _high efficiency_ chips which were passively cooled and living in a glass sandwich. Remove them from that environment where they could breathe more easily and boost the clocks a tad and they became a raging beast.
    People say that the other processor makers will catch up in a couple of years, but that's _really_ tough to see. Apple Silicon is the culmination of a decade of intense processor design financed by a company with _very_ deep pockets - who is fully cognizant of the competitive advantage Apple Silicon affords. Here's an article in Anandtech comparing the Firestorm cores to the competing ARM and x86 cores. It's very readable for an article of its ilk:
    www.anandtech.com/show/16226/apple-silicon-m1-a14-deep-dive
    Of course these are the Firestorm cores used in the A14, and are not as performant as the cores in the M1 due to the M1's higher 3.2 GHz clock speed.
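
    A quick sketch of why that annual cadence matters: treating the roughly 20% per-generation core improvement cited above as a given (it is the commenter's figure, not an official Apple number), the gains multiply rather than add. A minimal Python illustration:

    # Compounding of an assumed ~20% per-generation core speed improvement.
    # The 20% figure and five-generation span come from the comment above,
    # not from any measured or official data.
    def compounded_speedup(per_gen_gain: float, generations: int) -> float:
        """Relative performance after `generations` improvements of `per_gen_gain` each."""
        return (1.0 + per_gen_gain) ** generations

    for gens in range(1, 6):
        print(f"{gens} generation(s): {compounded_speedup(0.20, gens):.2f}x")
    # Five generations of +20% compound to ~2.49x, noticeably more than the
    # 2.0x you would get by adding 5 * 20% linearly.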

    • @CodingCoach
      @CodingCoach  3 years ago

      Hi Verne, thank you for sharing your thoughts, you have amassed a lot of information on this topic. While I agree that the rest of the industry will be following this direction in the long term (AMD has been producing fast chips), I do believe in the long term CISC is a losing proposition, as it will be much harder to be competitive fighting the natural advantages of RISC's highly concurrent and low-power nature. I have a feeling though that solutions might appear relatively quickly from the existing mobile space (Qualcomm, Samsung, etc.), and Windows for ARM already exists. While I do not think it will be easy or overnight, I can see the transition happening.

    • @vernearase3044
      @vernearase3044 3 years ago +1

      @@CodingCoach Qualcomm's latest - the 888 - uses a triple tier processor structure with a single Cortex-X1 (another ARM design) at the top of the pyramid.
      Cortex-X1 at 5nm is slower than the Lightning cores in last year's A13 at 7nm (except for floating point where it has the edge).
      Here's an Anandtech article comparing A13's Lightning cores to competitors.
      www.anandtech.com/show/15875/apple-lays-out-plans-to-transition-macs-from-x86-to-apple-socs
      AMD _did_ have an ARM project which according to rumor they're now in the process of reanimating.
      The problem is that most processor makers have been resting on their laurels and counting on shrinking lithography and different assembly techniques like AMD's chiplets (to improve yield) and boosting clocks to increase speed, but overvolting increases heat exponentially and a lot of PC development has gone into transferring that heat out of the chip to keep it from burning up and hasn't attacked the problem at the core.
      Processors are one of the few consumers of power which actually accomplish no work in the physics sense - they don't move that kilogram weight up or across the floor a meter - they simply ingest bits, rearrange them, and spit them back out.
      Most of the power is spent pushing a timing clock up in frequency and removing the resultant heat that is thus produced.

    • @CodingCoach
      @CodingCoach  3 years ago +1

      I agree with you, the next couple of years are going to be very interesting! I will have to see what I can dig up on AMD's ARM project; wonder why it was mothballed to begin with? Thank you for taking the time to add a valuable perspective!

    • @vernearase3044
      @vernearase3044 3 years ago +1

      @@CodingCoach Dunno ... never knew anything about it but one of the comments on one of the Linus Channels intimated this was taking place.
      Makes sense ... the writing's kinda on the walls for x86 since they're so energy inefficient and the way forward into wider CPUs is blocked by the nature of the beast.
      Not that I think going wider than eight is going to yield that much benefit ... I'd imagine that the Apple Silicon Team raised a toast whenever they managed to get sample code to trigger eight parallel instructions - though I suppose that would happen more frequently if you have long inline compute segments.

    • @JanTomas123
      @JanTomas123 3 years ago +1

      @@vernearase3044 This is the most interesting comment section I've read in a while.
      I've always been mad at the fact that laptops sucked because they had no way of efficiently dissipating heat... When I heard Apple was making their own chips I thought it would be a money move and they would be similar to Intel/AMD, but after digging a little bit I'm REALLY excited about the future of computing! In the end it's about making useful machines and it seems we're going the right way!
      Thanks for the valuable info!

  • @fireelement20
    @fireelement20 3 years ago +2

    Great content, but the low quality audio makes it hard to stay focused. It would be awesome to hear you in better quality!

    • @CodingCoach
      @CodingCoach  3 years ago +2

      Thank you, and I agree! I originally created it as a "bonus" lecture for my class... I have been learning / investing in creating better quality because I see that it might have a wider audience. Thank you for the feedback and stay tuned!

  • @beback_
    @beback_ 2 years ago +1

    I was prepared to roll my eyes at yet another instance of empty Apple marketing hype but it turned out quite amazing.

  • @NDakota79
    @NDakota79 3 years ago +23

    Watching this on my M1 Macbook Pro

    • @Teluric2
      @Teluric2 3 years ago +2

      Watching this on my 16 core ryzen 128gb pixel while rendering 4k 60fps 10 bit like butter.

    • @jakeausten9673
      @jakeausten9673 3 years ago

      @@Teluric2 @Tankado Oh yeah?
      Watching on my Ryzen Radeon RX 580, 8GB while rendering at 480p, not sure the FPS, it's OK.
      Browser runs like sludge.

    • @flaguser4196
      @flaguser4196 3 years ago +1

      watching this on a punch card computer.

    • @workhardforit
      @workhardforit 3 years ago +1

      @@Teluric2 Crap. Watching this on my 60 core Mactini with 128kb ultramegahyper unified memory with display, keyboard, or mouse.
      Comes with AirPlay 2000 and wirelessly sends video to my visual cortex.
      The 360 core neural engine predicts what I want to do before I even make a decision.
      You don’t even have to think anymore.

    • @Teluric2
      @Teluric2 3 years ago

      @@workhardforit peanuts , My dog have a 128 bit cpu 512 core with 8tbyte on die ram.

  • @adrianchallinor7045
    @adrianchallinor7045 3 years ago +3

    Stumbled across this video and enjoyed it enormously. I well remember my courses (in 1977-80!) discussing computer architecture. We had to theoretically design a variable architecture bit-sliced computer. Unfortunately the University point blank refused to let our course have access to a fab plant to try this out.
    But it did get us, as a class, noticed by the CPU vendors.
    One other thing I really like about the Apple M1 is that the I and D space is separate. This is a seriously good security feature, because now they can insist that only the instruction loader can load into I-Space and instructions can only execute from I-Space. This instantly means that a buffer overrun in D-Space can't overrun into the I-Space. One interesting thought is where they put the call/return stack. I don't know the answer to that.
    And now back to my day job of coding directed graph databases.
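
    A toy sketch of the separate I-space / D-space idea described above (only an illustration of the concept, not Apple's actual mechanism): ordinary stores can only address data memory, and only a dedicated loader writes instruction memory, so a data buffer overrun never lands on code.

    # Toy model: I-space is written only by the loader, ordinary stores only
    # touch D-space, so overrunning a data buffer cannot corrupt instructions.
    class ToyMachine:
        def __init__(self, i_size: int = 64, d_size: int = 64):
            self.i_space = [0] * i_size   # instructions
            self.d_space = [0] * d_size   # data

        def load_program(self, instructions):
            """The privileged loader is the only writer of I-space."""
            self.i_space[:len(instructions)] = instructions

        def store(self, addr: int, value: int):
            """A normal store instruction: it can only ever address D-space."""
            self.d_space[addr] = value    # a bad addr raises; it never reaches i_space

    m = ToyMachine()
    m.load_program([0xA9, 0x01])   # loader fills I-space
    m.store(3, 0xFF)               # running code can only scribble on D-space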

    • @jamieangus-whiteoak3656
      @jamieangus-whiteoak3656 2 years ago

      @Adrian Challinor, That was the exact same time I was trying to design a general purpose signal processor! Mine ended up being a RISC design! Wow, that was an interesting time to do hardware, as the speed of development was intense!

  • @adamp9553
    @adamp9553 3 years ago +1

    The big difference between RISC and CISC is opcode definition; RISC sounds better than something like ROC, reduced opcode computing. Effective address is part of the overall instruction variant count which makes CISC more complex than the mere fact that it uses varying instruction word sizes. The 6502 has relatively few instruction names but over 150 variants total, up to three opcode sizes.
    There's a balance in implementation based on the demands of the computer, number of registers, ease of coding, and throughput. And ARM has high throughput at low power.
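
    As a concrete illustration of that variant count, here are the standard encodings of the 6502's LDA mnemonic (a small Python table, using well-known published opcode values): one instruction name, eight addressing-mode variants, two or three bytes each.

    # The 6502's LDA: one mnemonic, eight encodings, 2- or 3-byte instructions
    # depending on the addressing mode.
    LDA_VARIANTS = {
        "immediate":    (0xA9, 2),
        "zero page":    (0xA5, 2),
        "zero page,X":  (0xB5, 2),
        "absolute":     (0xAD, 3),
        "absolute,X":   (0xBD, 3),
        "absolute,Y":   (0xB9, 3),
        "(indirect,X)": (0xA1, 2),
        "(indirect),Y": (0xB1, 2),
    }

    for mode, (opcode, length) in LDA_VARIANTS.items():
        print(f"LDA {mode:13} opcode {opcode:#04x}, {length} bytes")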

  • @sihasnesh2016
    @sihasnesh2016 3 years ago +2

    Great video! Wishing you good luck with your channel 😊.

  • @frantisekjilek2533
    @frantisekjilek2533 3 years ago +3

    Very interesting. It would be interesting to see some comparison of the GPU to other units on the market. I have little clue how many EUs and threads they have and how they stand up in effectiveness.

  • @urospocek4668
    @urospocek4668 3 years ago +2

    Great explanation. Have you considered making a follow-up to this video now that we have much more information, benchmarks, and new M1 iMacs coming out soon? I think that would be very useful for everyone. Thank you in advance and good luck with the channel.

    • @CodingCoach
      @CodingCoach  3 years ago +1

      Yes, in particular the differences that ARMv9 will bring for the next processor. Stay tuned...

  • @devcybiko
    @devcybiko 2 years ago

    To your point - the 16GB of RAM backed by highly efficient SSD Virtual Memory means you don't really need 32GB of RAM. I've seen benchmarks where apps that normally run in 32GB RAM are very comfortable in 16GB of RAM because the swapping is so efficient.

  • @souravSP
    @souravSP 3 years ago +3

    This was extremely helpful! Thank you for making the effort!

  • @BrunoSimioni
    @BrunoSimioni 3 years ago +2

    Loved this video! Thanks for sharing that and rescuing all the history!

  • @zc7504
    @zc7504 2 years ago +1

    one of the coolest profs I have known!

  • @madmotorcyclist
    @madmotorcyclist 3 years ago +3

    Unified memory harks back to the old hardware Lisp machines that preceded this concept.

    • @CodingCoach
      @CodingCoach  3 years ago

      I am fascinated by the history of Lisp and Lisp machines. I have been digging a bit to see if I can find more information on the architecture and memory layouts, but there were many machines over the years.

    • @madmotorcyclist
      @madmotorcyclist 3 years ago +2

      @@CodingCoach Actually got to use Symbolics machines (refrigerator size) back in the 80s. Those machines were way ahead of their time, even though they still had wire wrapped boards. The OS was amazing and adaptable and the CLIM implementation made GUI work a breeze. Lisp itself is a very efficient language and easy to maintain because of its inherent dynamic class and method implementation which was superior to C++ and other languages that require linking. As an example, another co-worker and I wrote a graphical semi-automatic planning and scheduling tool for spacecraft that was used for several missions. That only took 28k lines of Lisp code. Later it was translated over to Java and C++ and it took 300k lines of code. Too bad the language is now only used sparingly in AI research circles.

  • @adalbertocaldeirabrantfilh3127
    @adalbertocaldeirabrantfilh3127 2 years ago +1

    Great video! Congrats, my friend.

  • @AltMarc
    @AltMarc 3 years ago +4

    In the old days (Acorn), the main advantage of RISC vs CISC was the number of processor registers, where you needed to place the contents of memory before applying an operation, and the ability to do multiple operations per instruction.
    But nowadays I'm a bit lost; how does this work with cache?
    The unified memory is also a current theme, with Nvidia's Jetson Xavier, Clara and their Network Card...
    I was looking for in-depth knowledge about the M1 without diving into Apple's LLVM. Why is it so fast? How many processor registers does it have?...

    • @CodingCoach
      @CodingCoach  3 years ago +1

      The M1 is an ARM 64 bit architecture. I believe there are 31 general purpose registers.
      I think the biggest differences today between CISC and RISC revolve around the improved decoding because of a smaller instruction set and easier-to-decode fixed-length instructions. CISC has tried to keep up, but it always means more complex decoding, which leads to higher power consumption. Simple instructions are easier to decode, leading to wider possible pipelines and more concurrency.
      Cache is very important but not necessarily because of a RISC architecture. Apple is just generous with how much cache they implement.
      The unified memory architecture and SOC are additional efficiencies that again are not RISC dependent but are commonly used in modern RISC architectures.
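
      A minimal sketch of the fixed- vs variable-length decode point (the instruction lengths below are invented purely for illustration): with a fixed 4-byte encoding every boundary in a fetch window is known up front, so several decoders can work in parallel, whereas with variable lengths each boundary is only known after the previous instruction has been decoded.

      # Fixed-length: all boundaries computable independently (parallel decode).
      def fixed_boundaries(start, count, width=4):
          return [start + i * width for i in range(count)]

      # Variable-length: each boundary depends on the previous instruction's length.
      def variable_boundaries(byte_lengths, start, count):
          offsets, pc = [], start
          for length in byte_lengths[:count]:
              offsets.append(pc)
              pc += length
          return offsets

      print(fixed_boundaries(0x1000, 8))                                # all known at once
      print(variable_boundaries([1, 3, 2, 6, 4, 1, 2, 5], 0x1000, 8))   # found one at a time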

    • @AltMarc
      @AltMarc 3 years ago +1

      @@CodingCoach Thanks for the answer, I remember the time I upgraded my Acorn Risc PC's ARM610 to the StrongARM manufactured by DEC, and its incredible speed gain, especially with BASIC code because it could stay in its cache when executed.
      By the way, my Jetson Xavier also has 8 ARM64 cores but also 512 GPU cores (and 32GB of RAM, which is the most important feature when playing with neural networks...).
      But I don't know how much Apple modified their processor away from the ARM64 architecture.

    • @CodingCoach
      @CodingCoach  3 years ago +1

      Architecturally it should be an ARM 64 design; they would have to abide by the ISA, otherwise programs would not be able to run. That said, it is on the organizational side (pipelines, number of execution units, size of cache) that they can take any liberties they want. This is where any additional performance is found.
      I love playing around with old hardware of all sorts, if you enjoy hardware at a low level I highly recommend Ben Eater's channel if you have not seen it. I have my students building software emulators for the old 6502's but Ben actually explores building the chips from nothing. Link to his channel: ua-cam.com/channels/S0N5baNlQWJCUrhCEo8WlA.html

    • @AltMarc
      @AltMarc 3 years ago +1

      @@CodingCoach The old 6502 from the Commodore VIC-20 and BBC Micro...
      I was 13 when I began with the VIC-20 and then the famous C-64, self taught assembler and hardware mods...
      At 19 I got into Acorn Risc Machines (ARM), the Archimedes A3000 and later the RiscPC (RiscOS was also a great OS)
      Also, I still have a Newton MP2100, with a StrongARM chip, which was also inside my Sharp Zaurus C760.... and so on.
      It's still only a hobby for me,
      but on your question about the future of ARM: your students could go bare metal, on the STM32's Cortex-M4 and co., OR go with the Nvidia (they bought ARM) Jetson series using Linux on ARM; at least it's open-source and powerful too.

    • @CodingCoach
      @CodingCoach  3 years ago

      Good idea! I actually have an older jetson tx2 that is currently not doing anything and some students looking for hardware related honors projects!

  • @IsometricLight
    @IsometricLight 2 years ago +1

    Great video! Thanks for the information!

  • @sterhax
    @sterhax 3 years ago +8

    Is there a reason you keep saying gigabit instead of gigabyte? And thanks, I’m a developer who was trying to find precisely this information. Few channels have done anything besides get excited about benchmarks.

    • @CodingCoach
      @CodingCoach  3 years ago +2

      No, apologies for the gigabit slips, memory is definitely Gigabyte. And I am glad you found it helpful!

  • @shivanshtomar8596
    @shivanshtomar8596 3 years ago

    Great video! I would work on the audio because it's clipping, and the Picture in Picture is covering your slide. But good video, keep it up!

  • @fbifido2
    @fbifido2 3 years ago +1

    It's Aug 2021; have any tech YouTubers tested the speed of the Apple M1 and tried to ascertain the clock speed???

  • @gw7624
    @gw7624 9 months ago

    Really interesting video. Do you suspect the power efficiency of the M1/2/3 is more down to the adoption of RISC or the emphasis on high IPC and relatively lower clock speeds?

  • @lestereo
    @lestereo 2 years ago +1

    Thank you. Fantastic content.

  • @richardbussiere9178
    @richardbussiere9178 3 years ago +2

    The thing I can't get my head around is that "fabric". All these elements (8xGPU, Neural Engine, etc) are contending for that "Unified Memory". Why are there not memory contention issues? What secret sauce is inside that fabric? Is it a crossbar? Is it another layer of cache?

    • @CodingCoach
      @CodingCoach  3 years ago

      Yes, details on the fabric are annoyingly hard to find. Searches on "Unified Memory Architecture" don't yield much detail.

    • @piotrd.4850
      @piotrd.4850 3 years ago

      Probably hardware implementation of transaction memory & service bus.

    • @Teluric2
      @Teluric2 7 months ago

      There is no secret sauce in chip design. You're worshipping Apple in a sick way.
      The unified memory Apple is using was used in the same way by SGI in 1997 in RISC desktops. They also used a crossbar and a dedicated bus that allowed opening a 60-gigabyte satellite image file in 2 secs.

  • @Yusufyusuf-lh3dw
    @Yusufyusuf-lh3dw 3 years ago +2

    Thanks. That was a nice presentation. I would however disagree with some of your observations, especially about the decoder in CISC CPUs being a very complex module that consumes too much power. The decoder is actually quite a simple module that doesn't consume a lot of power. It generates a uop trace that gets executed much like RISC, even though it's not completely RISC. The disadvantage of x86 is that it carries a huge baggage of legacy architecture support that consumes a lot of ucode and brings with it a lack of power efficiency. Another problem with x86 is that it has a set of industry standard interfaces that are required at the hardware, software and firmware level, which again adds complexity and a power consumption penalty. Third and most important, Apple's M1 CPUs don't have most common interfaces like DDR, PCIe, Thunderbolt and many more legacy interfaces that actually consume a lot of power. As a matter of fact I have seen some die shots of Apple's previous generation CPU that show DRAM blocks on multiple sides of the die. This clearly indicates DRAM, or part of it, is integrated into the SoC die, which is a smart move that neither Intel nor AMD will make because it's difficult to convince OEMs to have such fixed memory sizes and configurations. I assume a major portion of the performance can be attributed to this, as Apple can have a very wide on-die interface with the built-in DRAM blocks. Typical DRAM access latency on a modern CPU over DDR is on the order of hundreds of clocks, and driving the DDR bus on the PCB consumes a heck of a lot of power. Typical access latency of a well organised LLC is on the order of 10 to 15 clocks. Bringing the DRAM inside the die would mean Apple can improve the DRAM latency by more than 50%. I assume the DRAM access latency to be on the order of 25 to 30 cycles.
    DRAM on die also means no SerDes and much simpler cache prefetch logic. This also simplifies the branch predictors, and that obviously means a very wide execution unit running at much lower clock speeds but getting huge performance numbers.
    This is a very wise decision for Apple because they don't have to bother about supporting different operating systems and a hundred different hardware configurations. Can Apple scale that to the desktop with industry standard interfaces? Well... I don't think so... 🙄

    • @CodingCoach
      @CodingCoach  3 years ago

      Thank you for your response!
      I believe you are correct about the legacy effect on x86, I am going to do more reading on your points.
      I also agree with your thoughts on Apple's DRAM integration. I am willing to bet that other manufacturers that have balked at set memory configurations in the past may find themselves signing on for just that in the future. The advantages may just be too great.

    • @saurondp
      @saurondp 3 years ago

      Apple's SOC supports all of the interfaces you mentioned, not sure where you are getting that it doesn't. And on the M1 Macs, the DRAM is on the CPU package, not on the processor die.

    • @Yusufyusuf-lh3dw
      @Yusufyusuf-lh3dw 3 years ago

      @@CodingCoach The unique advantage that Apple has which no one else got is that they own the CPU, the platform, the firmware and the OS. And they don't have to listen to any customer methodologies and strategies. It's just one single platform design and one operating system to support, which makes them really effective. But then the bulk of the ecosystem is not in macOS and the bulk of applications don't support their architecture. There are both pros and cons. Only time will tell whether they will be successful on a large scale.

    • @Yusufyusuf-lh3dw
      @Yusufyusuf-lh3dw 3 years ago

      @@saurondp They don't have a DDR bus outside their M1 package. Secondly, I have seen some die shots showing a few DRAM modules inside the CPU die. If that is the case, then it's much easier to get a high amount of efficiency and performance if they are willing to take the associated yield and density challenges. The DDR bus itself takes quite a lot of power to run the signals through the board.

  • @twelveightyone
    @twelveightyone 3 years ago +3

    Subscribed. Great video, thanks 👍

  • @idontcare-tk1te
    @idontcare-tk1te 2 years ago +2

    What could be the reason behind not including an L3 cache? I have not been able to figure it out yet. Also, now that apple has released the new version of M1 SoC, it looks like a good time to make a video on it.
    Another thing I'd request you to cover is the possible performance benefits offered by the improvement in data bandwidths in the new M1 SoC

    • @CodingCoach
      @CodingCoach  2 years ago

      Level 1, 2, 3, 4, etc. cache just means that there are more layers of cache... not that it's any more efficient or better to have them. The lower the level, the faster (lower latency) the cache, and usually the more expensive. Providing a larger level 2 cache could possibly remove any benefit of having a level 3 cache.
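
      A back-of-the-envelope average memory access time (AMAT) sketch of that point; every latency and hit rate below is invented for illustration (none are measured M1 figures), but it shows how a large, well-performing L2 can land in roughly the same place as a smaller L2 plus an L3.

      def amat(levels, memory_latency):
          """levels: list of (hit latency in cycles, hit rate); misses fall to the next level."""
          total, reach = 0.0, 1.0
          for latency, hit_rate in levels:
              total += reach * hit_rate * latency
              reach *= (1.0 - hit_rate)
          return total + reach * memory_latency

      big_l2_no_l3 = amat([(4, 0.95), (18, 0.95)], memory_latency=200)
      small_l2_with_l3 = amat([(4, 0.95), (12, 0.80), (40, 0.70)], memory_latency=200)
      print(f"L1 + large L2:   {big_l2_no_l3:.1f} cycles")      # ~5.2 cycles
      print(f"L1 + L2 + L3:    {small_l2_with_l3:.1f} cycles")  # ~5.2 cycles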

    • @CodingCoach
      @CodingCoach  2 years ago

      I'm not the normal "Tech YouTuber". The only reason I made this video was because it was relevant to a course I was teaching. You do make a great point though about the improvement in data bandwidths on the new M1 Pro and Max... I will look into that

  • @petrdvoracek670
    @petrdvoracek670 3 years ago +2

    Is there any video on how to build a virtual machine in TypeScript?

    • @CodingCoach
      @CodingCoach  3 years ago

      Not that I am aware of :)
      It would certainly take more than a video and it is the practical part of my organization and architecture course. If there is enough interest I would consider a video series to kick off that project.

    • @piotrd.4850
      @piotrd.4850 3 years ago

      It already is a VM - why would you do that? Try WebAssembly....

  • @gnkstudios6138
    @gnkstudios6138 2 years ago +1

    Excellent video. Would love to see you expound on the M1 Pro and Max chips. You explain things very well. Hence why you're a teacher haha.

  • @tipoomaster
    @tipoomaster 3 years ago +1

    A lot of people seem to be asserting that the M1 is somehow doing more with less RAM. I'm not satisfied with current explanations just mentioning the unified memory, is there anything more to this? It doesn't seem to be compressing more memory because it does it faster than Intel or something like that.

  • @saurondp
    @saurondp 3 years ago +1

    Great video, and the comment section here has been quite interesting to read. Just subscribed!

  • @1giveme
    @1giveme 3 years ago +4

    Great Video!

  • @Errcyco
    @Errcyco 3 years ago

    You need a better microphone setup man, this was a brutal listen.

  • @justinnamuco9096
    @justinnamuco9096 2 years ago

    I suppose the RISC choice solved the universal laptop battery life problem

  • @th3r3alloudking
    @th3r3alloudking 1 year ago

    Liked and subbed just because of Star Trek Discovery.

  • @john_hind
    @john_hind 3 years ago +3

    I wonder if you are maybe over-emphasizing the importance of architecture here (and indeed whether CISC versus RISC is a clear-cut distinction today)? The fact that Apple has gone CISC-RISC-CISC-RISC suggests that this was never the deciding factor. I suggest this is primarily about chasing the best silicon process node (Moore's law). In 2006, Intel had the best silicon foundries and reserved them for its own chip designs. Today, Intel has fallen badly behind and the only way Apple can access state-of-the-art silicon foundries is to escape from Intel architectures. A secondary consideration with the current transition is integration at the chip rather than the board level. Apple becomes free to make custom chips on a per-product basis mixing and matching design IP on the same chip saving power and cost.
    Look at the modern instruction set map for ARM and you are forced to conclude that the word 'Reduced' has lost all meaning! RISC is one of those academic computer science theories rather like relational databases that started out pure, simple and minimal but gradually accreted all the complexity of what it was supposed to supplant on contact with the real world! RISC was supposed to displace all the complexity of optimization to the compiler so the runtime could be kept really simple while major resources could be deployed at the less cost or time sensitive compilation stage. Unfortunately if this ever worked, it no longer does because of the rise of just-in-time compilation (for example, Javascript in the browser) which means compilation efficiency is just as important as runtime efficiency and has to be done on the runtime architecture. RISC and CISC have gradually converged and learned from each other until there is no longer a clear cut distinction to be made.

    • @CodingCoach
      @CodingCoach  3 years ago

      I agree that the history of Apple's progression and having already left a RISC-based PowerPC platform does cast some question on RISC's intrinsic abilities. However it has been my experience so far that the overwhelming majority of opinion on the matter is favorable for RISC. I would have to dig deeper but I would be willing to wager that the PowerPC at the time may not have been realizing the benefits of additional execution units. Also, power consumption might not have been a heavily weighted factor in the initial design of the architecture.
      Your proposition that manufacturing process is key is a very interesting one, I do not have a deep understanding in this area, but will read more. I live very close to a 300mm ex-IBM facility here in NY, and my knowledge in this area is mostly from conversations with individuals that work this facility, but I have no direct experience.
      While I agree that the lines have grayed between CISC and RISC, I am not sure I am following your point on JIT. That would only apply to interpreted code and static code with a runtime component. And any compilation process already carried out has produced machine instructions. Is there a link or article as a reference? I would like to learn more.
      I believe the distinction that can still be made is the simpler decode which is largely due to the instructions being fixed length. This means the decode is simpler since the length of the instruction does not need to be determined, again leading to the ability (I believe) to produce a wider core capable of more concurrency.
      Thank you for your comment!

    • @john_hind
      @john_hind 3 years ago

      @@CodingCoach This is interesting:
      archive.arstechnica.com/cpu/4q99/risc-cisc/rvc-1.html
      (Note we were already in the "Post RISC Era" in 2004!)
      And this:
      riscv.org/blog/2020/08/unlocking-javascript-v8-riscv-open-sourced/#:~:text=At%20the%20heart%20of%20the%20web%20technology%20stack,roadblock%20to%20enabling%20the%20web%20stack%20on%20RISC-V.
      (Note the long list of instruction set extensions needed to efficiently support Javascript.)
      www.tomshardware.com/news/tsmc-5nm-4nm-3nm-process-node-introduces-3dfabric-technology
      Process resolution advantages (the far eastern merchant foundries Apple is using for its ARM chips are essentially now at more than twice the resolution Intel is achieving) give Apple the potential for higher clock rates and/or better power efficiency. They also give it a larger transistor budget which it is able to leverage by adding instruction sets and even whole specialized execution units laser-targeted on improving performance exactly where it matters most for a specific device, which is essentially the polar opposite of the RISC philosophy. We might dub this architecture "I-RISC" (Increased Reduced Instruction Set Computing)! The advantage Apple has is vertical integration, hardware, APIs, compilers and OS - it can retire stuff in ways Intel (and even ARM itself) cannot because they have to maintain binary compatibility.

  • @ManOfSteel1
    @ManOfSteel1 3 years ago

    Less power consuming products are the future.

  • @mohammadaminmemarzadeh45
    @mohammadaminmemarzadeh45 3 years ago

    I still don't understand: if RISC is power efficient by design, why was PowerPC much more power hungry than Intel in the 2000s?

  • @diegonayalazo
    @diegonayalazo 3 years ago +1

    Thanks

  • @GabrielDalposso
    @GabrielDalposso 3 years ago +2

    I have a few questions:
    - does the chip have any L3 cache shared between the high performance and low power cores? it seems 12MB is big for L2, I can't imagine having a bigger pool of cache for L3...
    - I've looked before and found that USB-C is capable of PCIe up to version 3, so is the slide @ 26:53 correct? (maybe with USB4 they can reach speeds comparable to PCIe 4, but I'm not sure)
    - I know that they have Thunderbolt 3 support because USB4 is backwards compatible with it, but for Thunderbolt 4 I think the manufacturer needs a decoder from Intel (which I don't believe they do), is this correct?
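
    For reference on the second question, a rough comparison of the commonly quoted raw link rates (these are generic published figures before protocol overhead, not anything measured on the M1):

    # PCIe 3.0/4.0 use 128b/130b encoding, so usable throughput per lane is
    # roughly the transfer rate (GT/s) * 128/130.
    def pcie_gbps(gt_per_s, lanes):
        return gt_per_s * lanes * 128 / 130

    links = {
        "PCIe 3.0 x4":          pcie_gbps(8.0, 4),    # ~31.5 Gbps
        "PCIe 4.0 x4":          pcie_gbps(16.0, 4),   # ~63.0 Gbps
        "USB4 / Thunderbolt 3": 40.0,                 # 40 Gbps signaling rate
    }

    for name, gbps in links.items():
        print(f"{name:22} {gbps:5.1f} Gbps (~{gbps / 8:.2f} GB/s)")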

    • @CodingCoach
      @CodingCoach  3 years ago +1

      Hi Gabriel,
      there is no L3 cache, just individual L1 caches within each core and the unified L2 cache for all cores.
      As far as the USB and Thunderbolt 4 compatibility questions go, those slides are directly from the Apple presentation when the chip was announced. I'll have to look and see what additional information has been released in the month that's passed.

    • @GabrielDalposso
      @GabrielDalposso 3 years ago

      @@CodingCoach thanks for the reply!

  • @michalwojtass1769
    @michalwojtass1769 3 years ago +1

    Very good explanation - BUT it lacks information about how the M1 could have such good results in running (= decoding) x86 applications...

    • @CodingCoach
      @CodingCoach  3 years ago +2

      I will see what information is out there on Rosetta 2. I have a feeling the answer is going to be a combination of efficiently written software keeping the overhead to a minimum, but at the end of the day the overhead is less than or roughly equal to the increase in speed provided by the hardware. Hence the reports that some software runs as well as or better than native x86. I will look around and see how much information is out there. Would be interesting to learn more.

    • @saurondp
      @saurondp 3 years ago

      @@CodingCoach I can't confirm this, but from what I've heard the M1 includes support for some x86 instructions. That coupled with Rosetta 2's binary translation occurring at the time of software installation could go a long way in explaining why it has such impressive speeds when running x86 applications.

  • @CommonCentsRob
    @CommonCentsRob 2 years ago +1

    You lost me at '...my favorite show Star Trek Discovery'. lol

  • @sam_6480
    @sam_6480 3 years ago

    I think there's a typo at 26:44; the M1 does not support Thunderbolt 4, only Thunderbolt 3.

  • @adalbertocaldeirabrantfilh3127
    @adalbertocaldeirabrantfilh3127 2 years ago +1

    A friend of mine just told me there are some limitations in using RISC architecture in notebooks or desktops. Is this true?

    • @CodingCoach
      @CodingCoach  2 years ago

      No, there are differences between them but nothing that would prevent use. Apple's M1 is RISC, so RISC is currently being used in laptops and desktops.

  • @GustavoNoronha
    @GustavoNoronha 2 years ago

    Very interesting, thank you! I am a software developer and was never very interested in hardware until the M1 came out, so I've been learning a lot. Something you said had me scratching my head: Apple moved away from PowerPC (which I had forgotten was RISC) to Intel for... power efficiency? How did PowerPC screw this up if they had the superior instruction style?

    • @Teluric2
      @Teluric2 7 months ago

      Apple used PowerPC, which was a simplified version of the IBM POWER4 processor.

  • @mrjean9376
    @mrjean9376 3 years ago +1

    I'm subbed! This is a very great channel

  • @gaborenyedi637
    @gaborenyedi637 3 years ago +1

    Not megabit, but megabyte. You say it many times.
    Another thing: this L2 cache seems to be an L3 cache to me, i.e. they just left out the usual L2. A shared L3 is not a big thing and 12MB is quite small, e.g. Zen3 has 32MB of L3 shared. But you say it's big. Why? Is this faster than that in Zen?

    • @CodingCoach
      @CodingCoach  3 years ago +1

      Yes, thank you for the comments; memory is certainly measured in gigabytes, not bits.
      Cache levels have no requirements for being shared, so while it is more common to have an L3 shared cache and L1 and L2 that are unique to cores, this is in no way a requirement of L3. The only requirement of a level of cache is that it is between the next lowest level and the next highest or memory; you do not "skip" levels.
      There are plenty of examples of increased cache levels. System z's have 128 MB of L3. But remember this CPU is competing against i3s and maybe i5s. Apple has not released its higher end chips yet. I would personally hold off comparing it to the mid to high end chips until those are released.

    • @TheFredFred33
      @TheFredFred33 3 years ago +1

      @@CodingCoach @gabor
      😄👌🏼 great, great talks! Perhaps I am wrong, but the biggest cache is not the grail. Big cache capacity and cache levels have their limits on performance and transistor use.
      Apple seems to have sized things to provide a real impact. Not too big, not too small.
      Do you agree that the real Apple designers' talent is finding the smart, balanced choices that provide the best performance?

  • @kethibqere
    @kethibqere 3 years ago +1

    Very informative.

  • @mikejones-vd3fg
    @mikejones-vd3fg 3 years ago +2

    Very cool

  • @felipe367
    @felipe367 3 years ago

    Would it be worth getting 16GB of RAM for "future proofing"? I don't intend to hit the GPU hard, but seeing as RAM is shared 🤨

  • @irisfailsafe
    @irisfailsafe 3 years ago +1

    The test will be when the new Mac Pro comes. If it destroys a Xeon, like annihilates it, then everyone will start developing SoCs. However I don't see Intel doing it. They will try to push x86, but unless they innovate they are in trouble. I see Nvidia as the company that can bring SoCs to Windows and Linux

    • @miguelpereira9859
      @miguelpereira9859 3 years ago

      If Nvidia is able to acquire ARM then I think they will take over

  • @skitzobunitostudios7427
    @skitzobunitostudios7427 3 years ago +1

    You rock. New Jersey EE graduate from DeVry here (from the 80s when it was owned by Bell Labs). Great to have channels that aren't just screaming kids and cats running from cucumbers. But dude, "Star Trek Discovery" blows

    • @CodingCoach
      @CodingCoach  3 years ago

      thanks!
      I have to admit this season has not been the best.. but I really liked season 2.

    • @skitzobunitostudios7427
      @skitzobunitostudios7427 3 years ago

      @@CodingCoach Personally..... I really like the way "The Orville" sort of captures the campiness of the first Star Trek. Even though I'm not a ST geek... I think sci-fi should be fun and not be overthought

  • @amjadtrablsi4051
    @amjadtrablsi4051 3 years ago +1

    Great .....

  • @Isaqiu
    @Isaqiu 3 years ago +1

    So that means the wider the execution architecture, the more things it can do in a single clock?

    • @CodingCoach
      @CodingCoach  3 years ago

      Yes, single cores can perform at superscalar speeds of more than 1 instruction completed per clock cycle. A wider architecture means this ratio is increased.

    • @Isaqiu
      @Isaqiu 3 years ago +1

      @@CodingCoach wow thanks sir!
      umm, can i ask u one more question?
      Can we imagine an execution unit or EU as a worker in a factory? So, the more EUs we have, the more workers we have?

    • @CodingCoach
      @CodingCoach  3 years ago +1

      Yes, that is an okay analogy. The workers are specialized though, some are good at floating point math, others integer math. Some are good at loading or storing data in memory. The more workers you have the more complex the system to distribute the work has to be because the output of the factory must be in order (the program).
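
      A toy model of that factory analogy (the unit mix and instruction stream below are made up, and dependencies and in-order retirement are ignored for simplicity): each cycle the scheduler hands ready work to whichever matching unit is free, so adding units raises instructions per cycle.

      from collections import Counter

      def run(program, units):
          """program: list of instruction kinds; units: how many workers of each kind."""
          pending, cycles = list(program), 0
          while pending:
              cycles += 1
              free = Counter(units)              # every worker is free at the start of a cycle
              waiting = []
              for kind in pending:
                  if free[kind] > 0:             # a matching worker is available
                      free[kind] -= 1
                  else:
                      waiting.append(kind)       # wait for a free worker next cycle
              pending = waiting
          return len(program) / cycles           # instructions per cycle (IPC)

      program = ["int", "int", "fp", "load", "int", "store", "fp", "int"] * 4
      narrow = Counter({"int": 1, "fp": 1, "load": 1, "store": 1})
      wide = Counter({"int": 4, "fp": 2, "load": 1, "store": 1})
      print(f"narrow machine IPC: {run(program, narrow):.2f}")   # 2.00
      print(f"wide machine IPC:   {run(program, wide):.2f}")     # 8.00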

    • @Isaqiu
      @Isaqiu 3 years ago +1

      @@CodingCoach ohhh okay, im so excited about this new chip! Thank you...

  • @Kefford666
    @Kefford666 3 years ago +2

    He says b when it shows a B

  • @WayTooH1gh
    @WayTooH1gh 3 years ago +1

    I am desperately looking for the manufacturer of the DRAM in the M1 chip. Is it being manufactured by TSMC as well?

    • @CodingCoach
      @CodingCoach  3 years ago +1

      Sorry, I am not sure.. Maybe someone watching will know.

    • @Teluric2
      @Teluric2 3 years ago

      It used to be Hynix.

    • @WayTooH1gh
      @WayTooH1gh 3 years ago

      @@Teluric2 thank you very much for reply :)

    • @WayTooH1gh
      @WayTooH1gh 3 years ago

      @@CodingCoach Couldn't find it. Maybe it is TSMC this time. Thanks for the comment :)

    • @Teluric2
      @Teluric2 3 years ago

      @@WayTooH1gh Well, the memory chips inside the iPhone are Hynix.

  • @user-ku1en9nh2s
    @user-ku1en9nh2s 3 years ago +2

    I'm Japanese!
    I would love it if you added English subtitles!

    • @CodingCoach
      @CodingCoach  3 years ago +1

      I will look into that thoroughly! Please forgive any errors; I used Google Translate for this message.

  • @andrearyanta6445
    @andrearyanta6445 3 years ago +1

    What is the neural engine?

    • @CodingCoach
      @CodingCoach  3 years ago

      It is an area of the SOC that acts as an AI accelerator. It is designed to accelerate artificial intelligence applications, especially artificial neural networks, recurrent neural networks, machine vision and machine learning. You can read more here: en.wikipedia.org/wiki/AI_accelerator

    • @el-danihasbiarta1200
      @el-danihasbiarta1200 3 years ago

      @@CodingCoach Can we use the neural engine to help the GPU with things like image stabilization in video editing, or motion tracking? Because I always ask why they give 16 cores to the neural engine

  • @utubekullanicisi
    @utubekullanicisi 3 years ago +1

    Why did you keep saying 16 gigabits instead of 16 gigabytes?

  • @el-danihasbiarta1200
    @el-danihasbiarta1200 3 years ago +1

    If we look at this SoC it looks so efficient, but how does it look from a software engineer's perspective? I just hope that this does not become the next Sony Cell processor. And how do they use the neural engine to speed up compute tasks? I have heard Blackmagic has made the DaVinci neural engine; how would this help me as a video editor to work faster?? Thank you

    • @CodingCoach
      @CodingCoach  3 years ago

      To be honest, from a software point of view it's not really any different from the A series chips that have been used since the iPhone 5s. So far as the neural engine goes, I haven't dug that deep into that subject, but I believe Apple's APIs will allow developers to access it

    • @el-danihasbiarta1200
      @el-danihasbiarta1200 3 years ago

      @@CodingCoach And I have one more question: could the image signal processor help with image processing in a video editor or photo editor, like raw video/picture data from a RED camera or from Canon, since this is new on PC but common on phones? And if it could, how do software developers use that??

    • @CodingCoach
      @CodingCoach  3 years ago

      @DaniH I believe using APIs like the following: developer.apple.com/documentation/coreimage allows you access to this hardware. I am making this assumption based on Apple's history and not on first-hand knowledge.

    • @el-danihasbiarta1200
      @el-danihasbiarta1200 3 years ago +1

      @@CodingCoach Thanks for the info, you got my subscribe button 👍

    • @piotrd.4850
      @piotrd.4850 3 years ago

      @DaniH - "Neural Engine" - probably hardware accelerator for sum of mutiplications and matrix operation. Though having Cell redone in modern node and another thought into SDK....

  • @hiranthabandara6682
    @hiranthabandara6682 3 years ago +1

    Will a RISC-V processor be better than Apple's ARM?

    • @CodingCoach
      @CodingCoach  3 years ago +1

      Interesting question. I am going to do a bit of research before giving a direct answer comparing the architectures. I will say, however, that I believe it will not make nearly as significant a difference as organizational choices will, no matter which RISC architecture is used. Things like cache, number of execution units, etc...

    • @hiranthabandara6682
      @hiranthabandara6682 3 роки тому

      @@CodingCoach But there are rumors about a RISC-V processor at 1 W and 5 GHz.. just google it. It would be mind-blowing for the industry if it can be achieved in a short time.

    • @Teluric2
      @Teluric2 7 місяців тому

      Absolutely, google the Jim Keller interview. He said that if he were paid to design the best CPU, it would have to be RISC-V.

  • @bobbyright2010
    @bobbyright2010 3 роки тому

    How big is your TV?

    • @CodingCoach
      @CodingCoach  3 роки тому

      The appropriate size for the wall it is on and my budget? Never been asked that before. I know.. bigger than when I was a kid :)

  • @piotrd.4850
    @piotrd.4850 3 роки тому

    Biggest "so what"? There's little if any software. Yeah, large part of silicon die is used by Neural Engine that reportedly equals in performance to high-end consumer GeForce cards, but none of the software takes advantage of it. Same goes for almost all other specialized hardware blocks - literally few tools and libraries and workflows make use of it. Compiling with --arm flag (if that happens) is not enough. Who's going to rewrite apps to be compatible with decade old GCD, specialized hardware blocks for running it for fraction of users of platform that has singile digit percentage of total market share? We also alredy know, that I/O of M1 SoC is very restricted - no TB4, storage that BARELY scrapes up PCIe 3.0 speeds while reportedly using PCIe 4.0. Apple - I think - provides backend for clang compiler, but to make this viable platform they'd have to redesign, reimplement and recompile major runtimes (.NET, Java, C/C++) _from_ scratch. Then rebuild whole toolchain with it. Anyway - Apple fixed a LOT of problems (thermals, noise, performance) of Macs with this transition alredy. Still, to realize full potential there's LONG way to go. Long and uncertain. These difficulties are handled better than by Intel, but it is _software_ that is lagging behind and WILL continue to do so. Basically, what we need is entirely new OS, preferably written in managed code ( Barrelfish and other MS project) with VM built STRICTLY for underlying hardware architecture and I/O. Throw away decades of bloat and newest stuff ( bottles, universal binaries, dockers etc) build something akin to MiniX 3.0 for razor thin hypervisor and hardware to run workloads on. Today M1 Macs start about as long as intel counterparts and this is ONLY software matter - same on Windows Machines - hardware already allows for single seconds launch from cold shutdown to working destkop and services running. Why isn't it the case?

  • @racistpixel1017
    @racistpixel1017 3 роки тому

    Microsoft had an ARM SoC in the Surface X years ago.

    • @niazm.sameer9088
      @niazm.sameer9088 3 роки тому

      Yep, but with the M1, it validates the idea of ARM on the Desktop

    • @amaledward2147
      @amaledward2147 3 роки тому

      Nobody cares what Microsoft does except morons who still game on PC.

  • @LaurentLaborde
    @LaurentLaborde 3 роки тому

    I have waited forever for Intel/AMD to make a powerful SoC. I was very enthusiastic when AMD made their APUs, but they turned out to be very low-cost, low-power, and pretty much unusable. I was overjoyed when Intel talked about embedding an FPGA in their CPUs, only to find out it was embedded in a crappy Atom processor and impossible to buy anyway. And Apple finally did it. I hate Apple for non-technical reasons... and I'm writing this comment on an M1 and I'm happy about it... I'm now learning Swift, Apple's frameworks, Apple's tools, ... I don't want to, but it's worth it. The M1 is absurdly good. I don't even care about performance per watt or how many days I can use it without recharging. But here we are: it's powerful, silent, flawless. Hopefully they'll release an even more powerful M2XXL many-core beast to do as much math as I can imagine :)

  • @showtopboxcouk
    @showtopboxcouk 3 роки тому +1

    Subbed

  • @pliniopaolinelli
    @pliniopaolinelli 3 роки тому +1

    Nice video!

  • @carlhopkinson
    @carlhopkinson Рік тому

    16 GigaBYTES not BITS.

  • @alex-thangnguyen2746
    @alex-thangnguyen2746 3 роки тому

    Discrete or non-discrete processing? Do we want to reinvent the wheel in one snapshot? Have I scratched my head about this in the past, professor? After college-level computer science I went on into hardcore national security work. I was forced to become a hardcore hardware guy who had to think this out more carefully. Intel or Microsoft might switch from a CD disk to a new ARM CPU for their motherboards to wipe out the M1 CPU. We can put the entire internet on a home PC, no problem. So the M1 is not a threat to the computing world, only to the hardware world. The same architecture will annoy the M1 camp now.

    I DO ADMIRE APPLE for some odd reason, but they had to convince too many onlookers: where are you taking us now? As a computer science student in 1988 I began with an Apple, and that year a few guys and I began to overclock them with Orange, California kits made for the Apple Mac II. Is Apple going to change Microsoft's mind on how they do business? Over the years I began to see computers in more practical ways once I started doing computer security. Cost always rules over consumption, except in the world of computer security. So Apple has some advantages in the retail market, but most people today will not shell out $1000 when a laptop costing $200 suits them perfectly.

    In terms of desktops, if you want to be stuck with the M1 for ten or twenty years, then it is a good choice in a very fast-changing environment. I do not think Intel will be brought to its knees or face bankruptcy for being accused of elitism. However, for those who have $1300 right now and want an M1, it makes a lot of sense. One chip is not going to decide the fate of computers; we have been there and done that before. Similarly, the old rivalry between x86 and ARM has pushed many industry leaders to question their idealism against reality. If Apple can price the M1 at pennies on the dollar compared to Intel, they may change some suspicious minds; until then, I don't see the cost of security becoming a nightmare we all have to live with. Do I admire the M1 reintroduction? We will see, because it is so hard to penetrate the computing world nowadays as a startup company.

  • @chuckpatterson8895
    @chuckpatterson8895 3 роки тому +1

    Intel is done.

  • @vartannazarian3451
    @vartannazarian3451 3 роки тому +1

    You can find benchmarks of the GPU and CPU. The CPU is like a Ryzen 5 3600, and the GPU is below an NVIDIA 1030/1050. But it's ARM, and pro musicians and pro 3D game devs can't work with this. All the music software devs say it's garbage for making software like Cubase, Fruity Loops, and Pro Tools on the M1 the way they do on x86.

    • @CodingCoach
      @CodingCoach  3 роки тому

      I have seen some of the benchmarks; it will be interesting to see how things change over the next week as consumers start testing. I have no doubt that the current software landscape is x86-focused for many industries.. It will be interesting to see 1. how Rosetta 2 does, and 2. whether ARM software that was previously mobile-focused "graduates" up to desktop class.

  • @Tapajara
    @Tapajara Рік тому

    Apple isn't constrained to using an industry standard architecture because they control the whole stack. And because of that people like me will never buy an Apple product. I control my whole bank account. I won't even spend a penny on Apple + video.

  • @MisterTechnologic
    @MisterTechnologic Рік тому

    Oooh gotta dislike just for Discovery. It’s the worst one haha. Jk any Trekkie gets a like from me - but still rethink your life choices 😂

  • @selfhelp9175
    @selfhelp9175 3 роки тому

    I would rather listen to the head engineer at Apple who invented it than some washed up philosophy major professor.

    • @el-danihasbiarta1200
      @el-danihasbiarta1200 3 роки тому +2

      And why are you here??

    • @piotrd.4850
      @piotrd.4850 3 роки тому

      Apple did not invent anything.

    • @Teluric2
      @Teluric2 7 місяців тому

      The M1 wasn't designed by Apple engineers; an external team of chip architects was hired by Apple, and they had worked for AMD.
