Sophie Wilson - The Future of Microprocessors

COMMENTS • 86

  • @johnsim3722
    @johnsim3722 7 months ago +2

    Sophie Wilson is a legend. Nobody else has influenced processor design as she did with the ARM. And her quips about Brexit were spot on too.

  • @tangentfox4677
    @tangentfox4677 1 year ago +2

    It's funny how this is still super relevant despite being 6 years old.

  • @klaxoncow
    @klaxoncow 7 years ago +22

    Yes, Sophie, thank you for an amazing talk. :D

  • @rdoetjes
    @rdoetjes 4 years ago +9

    I recently got back into assembler. I did 6502 as a kid, then did 68000, then a step back to Z80 and 8086 at school. When I graduated college I became a developer, but it's all very high-level languages; the lowest I get is C. So, as an academic and retro trip down memory lane, I started to do assembler at home again, and for some reason I got tickled by the ARM. I remembered a colleague of mine had a RISC OS system back in 1997, when I had an Alpha (he too moved to Alpha after that). But he wrote Boulder Dash in assembler on his then 6-year-old RISC OS machine, and I remember being really impressed by that OS. It booted from ROM, only some drivers came from disk, and it's all written in assembler. So I started looking into ARM assembler only recently. Alpha was already brilliant in its design and so much easier, but ARM now is almost like a high-level language. I love the instructions' readability and fixed length; it's very predictable. The fixed length can be quirky with immediate assignment, but it's logical. It's really quite an impressive little cheap CPU and easy to program low-level.

  • @llaith2
    @llaith2 6 years ago +12

    Superb. I love the ease with which Sophie explains microprocessor matters. This is the clearest explanation of the factors behind the faltering of Moore's law that we've seen over the last decade. Thanks Sophie!

  • @Frisenette
    @Frisenette 7 years ago +18

    It's great to see she is still going strong.

  • @paulmilligan3007
    @paulmilligan3007 4 years ago +2

    Coming to this in early 2020, I thought it was going to be boring but it is worth watching through.

  • @martinda7446
    @martinda7446 6 years ago +2

    Sophie Wilson, my hat is off. Wonderful talk, full of fascinating insight and history delightfully delivered. My hero (non sexist usage).

  • @thrillscience
    @thrillscience 7 years ago +7

    Great talk! Thanks Sophie Wilson and Erlang Solutions

    • @vfclists
      @vfclists 7 years ago

      I knew her as Roger Wilson a long time ago. I even got "his" autograph

  • @junkerzn7312
    @junkerzn7312 4 years ago +9

    Still relevant today, a nice video. I grew up writing machine code on the 6502 (not even using an assembler until I wrote one later on... entered the hex directly in the machine language monitor!). There were many tricks one could play to get past the 8-bit data width limit by using self-modifying code and the two indirect memory modes. The best thing about the processor was that the instruction cycle was 100% deterministic, so you could actually transfer data between two machines simply by synchronizing the two processors at the start and then shoving data out the I/O (and in the I/O on the other system) without any further handshakes. Literally as fast as the CPU could issue the writes.
    The part about more of the silicon having to be dark turned out not to be true. Right now it's something like one gate out of five (20%) for high-density logic, and the issue is more about signal integrity and not so much about power density. Power density for the chip as a whole is mitigated by the cache-to-logic ratio. The power density of the CPU data and instruction cache logic is very low whereas the power density of the compute logic is very high, so the area ratio between the two can be used to regulate the power density of the die as a whole without having to resort to adding tons of dark silicon.
    Since modern CPUs need huge caches to operate efficiently, it turns out to work quite well. The early CPUs had no data or instruction caches at all.
    -Matt

  • @snowymwah
    @snowymwah 7 years ago +8

    Great talk! Very interesting.

  • @TheDavidlloydjones
    @TheDavidlloydjones 7 years ago +5

    What an uncommonly sensible and intelligent woman! She refuses to fall into the logical error of reification, in this case the reification of Moore's Observation.
    The most commonly seen and heard variety of this error is probably the asinine "We ended up in the nice comfortable place we are today because of reversion to the mean." Reversion to the mean is a powerful engine with exactly the same amount of driving force as Moore's "law." None.
    She is also a huge example of that olde rule: there's no limit to what you can do if you don't care who gets the credit. The whole lecture is littered with her generosity to her coworkers, to other innovators, and to people over on the hardware side. What a winner!

    • @rich1051414
      @rich1051414 4 years ago

      "Reversion to the mean" is absolutely meaningless in anything but hindsight. We cannot know what the 'mean' even is. Such a meaningless statement...

  • @garyproffitt5941
    @garyproffitt5941 2 years ago +1

    Blimey I'm impressed with silicon wafer chips and hello Computer World engaged in amazing Sophie Wilson✔.

  • @qbradq
    @qbradq 4 years ago +1

    Fantastic talk from a great speaker!

  • @mouseminer2978
    @mouseminer2978 4 years ago

    Keep it up. More Nostalgia is all we need.

  • @zetaconvex1987
    @zetaconvex1987 4 years ago +1

    A fascinating presentation.

  • @alexandrugheorghe5610
    @alexandrugheorghe5610 7 years ago +4

    There is one more fundamental problem that comes from quantum mechanics: even if, say, we could get around the thermal issues with better materials (nanotechnology keeps improving), we will still hit a limit due to tunneling once the gate becomes really narrow.

  • @jburr36
    @jburr36 6 years ago +3

    One thing not mentioned was EMF radiation within the processor. That causes design issues too.

  • @LeandroCoutinho
    @LeandroCoutinho 3 years ago

    Amazing talk!

  • @jimreynolds2399
    @jimreynolds2399 1 year ago

    Roger that Sophie!!! Great talk 🙂

  • @ginahagg3097
    @ginahagg3097 7 years ago +9

    most fascinating talk. Can't get away from Amdahl's law..

    • @Objectorbit
      @Objectorbit 6 years ago

      It's a bitch. The single worst (for lack of a better term) thing standing in the way of so much scientific progress on the computing side of things. Even more so than thermals and Moore's Law (which is saying something!). It's a hell of a thing, standing in the way of so much, you know?

    • @jburr36
      @jburr36 6 years ago

      You can if you make different types of processors which are not constrained by the same issues. I can foresee optical processors in the future: processors which use a single light conductor to carry hundreds of different colors in the spectrum, replacing multi-lane bus strips.

    • @Inertia888
      @Inertia888 4 years ago

      @@jburr36 do you think the interaction of waves would be a problem?

    • @ffmfg
      @ffmfg 3 years ago

      You also can't apply an old law to modern circumstances blindly. Check this article for example: www.hpcwire.com/2015/01/22/compilers-amdahls-law-still-relevant/

  • @AlainHubert
    @AlainHubert 3 years ago +2

    Very interesting. What I'm taking from this is that, even though hardware has advanced into multi-cores, software hasn't followed up. Maybe we need a new high-level parallel programming language, so that we could finally get rid of obsolete sequential C++ and actually take advantage of all of those available cores that we pay so much money for? We use computers to help us design very efficient hardware; why can't we use computers to help us design a better, more efficient, high-level parallel programming language better suited to current technology? C is a nearly 50-year-old programming language. It equates to using binary toggle switches to program an Altair 8800 in machine language all over again.

    • @Theineluctable_SOME_CANT
      @Theineluctable_SOME_CANT 2 years ago

      C is old but it is so basic and elemental that you can't call it primitive. Elegant is a better word.

  • @w0ttheh3ll
    @w0ttheh3ll 4 years ago +1

    35:00 amazing slide. what I'm missing is cost per instruction per second, though.

  • @BruceHoult
    @BruceHoult 4 years ago +2

    When Wilson says "We'll give you lots of cores but most of them will be turned off to avoid overheating" that's kind of true but it's also failing to join up the dots of some of the other points she made individually. First, you *can* run all the cores if you reduce the voltage and frequency. For example the i9-7980XE can run all 18 cores continuously forever at 2.6 GHz, or one core at 4.4 GHz. Obviously if you do this then 18 cores are not 18 times faster than 1 core .. it's more like 10.6x faster. So why not just lock the thing to 2.6 GHz and call it a 2.6 GHz chip? Well, because Amdahl's law. If you take that 95% parallel, 5% serial task and run it on an 18 core processor at 2.6 GHz then it's 9.7x faster than a single core at 2.6 GHz, or 5.75x faster than a single core at 4.4 GHz. BUT ... if you run the 95% at 2.6 GHz and the 5% at 4.4 GHz then you get 7.16x faster than a single core at 4.4 GHz or 12.12x faster than a single core at 2.6 GHz. In the limit, you *can* get a 20x speedup compared to a single core at 4.4 GHz by adding enough cores for the parallel part, even if you have to run them at 2.0 GHz or even 1.0 GHz.
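
    The arithmetic in this comment can be checked with a short sketch. The `speedup` helper is hypothetical (not from the talk); the 95%/5% split, 18 cores, and 2.6/4.4 GHz clocks are the figures the comment uses:

```python
# Amdahl's law with per-phase clock speeds: the serial fraction can run at
# the single-core boost clock while the parallel fraction runs on all
# cores at the lower all-core clock.
def speedup(parallel, cores, par_ghz, ser_ghz, ref_ghz):
    """Speedup vs. one core running the whole task at ref_ghz."""
    ref_time = 1.0 / ref_ghz  # whole task on a single reference core
    time = (1.0 - parallel) / ser_ghz + parallel / (cores * par_ghz)
    return ref_time / time

# 95% parallel task on an 18-core chip (all-core 2.6 GHz, boost 4.4 GHz):
print(round(speedup(0.95, 18, 2.6, 2.6, 2.6), 2))  # 9.73:  all at 2.6, vs one core at 2.6
print(round(speedup(0.95, 18, 2.6, 2.6, 4.4), 2))  # 5.75:  all at 2.6, vs one core at 4.4
print(round(speedup(0.95, 18, 2.6, 4.4, 4.4), 2))  # 7.18:  mixed clocks, vs one core at 4.4
print(round(speedup(0.95, 18, 2.6, 4.4, 2.6), 2))  # 12.15: mixed clocks, vs one core at 2.6
```

    The mixed-clock figures land within rounding of the 7.16x and 12.12x quoted above, and as `cores` grows the mixed-clock time tends to 0.05/4.4, giving the 20x ceiling the comment mentions.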

    • @patdbean
      @patdbean 2 months ago

      Since this talk they (ASML) have gotten EUV lithography working. That explains the 7nm and smaller feature sizes.
      The parallelism is still an issue: you may have 64 cores, but how many of them are active at any one time in a desktop PC?
      In a server, yes. A web server could be sending 64 pages to 64 clients, using 1 core to render each page, a bit like the example she gives with the FirePath chip running 36 lines of ADSL. That works because they are 36 totally independent tasks.

    • @BruceHoult
      @BruceHoult 2 months ago

      @@patdbean a desktop machine being used by a developer can easily make good use of 64 or more cores, depending on the software being developed. For example, both the Linux kernel and LLVM have good build systems that can use a large number of cores -- certainly a lot more than 64. On the other hand, even on a machine with 64 cores, building GNU software such as binutils, gcc, and glibc averages around 9-10 cores in use, alternating between using alllll the cores for a few seconds and then using just 1-4 cores for a while. Since my previous message, where I had an 18-core i9-7980XE, I've moved first to a 32-core ThreadRipper for five years and more recently to a laptop with an i9-13900HX with 24 cores, which can run a single core at 5.4 GHz (they claim ... I've seen 5.3 GHz reported in task manager in reality) or all 24 (and 32 threads) at around 4.25 GHz until it gets hot, at which point it drops to 3.95 GHz. So a nice advance on that 2017 machine, even if you ignore the new machine being a battery-powered 2.5 kg portable while the 7980XE and ThreadRipper were both 20+ kg water-cooled tower machines.

    • @patdbean
      @patdbean 2 months ago

      @@BruceHoult All true, but you can only use all those cores at once because:
      1. You are running many tasks at once (as in the ADSL and web server examples), or
      2. You are running a task that is in itself parallelizable, as in the example given in the talk of ray tracing.
      Going 1-2-4 cores over 1995-2010 was really worth having; 4 to 8, maybe; and 8 to 16+, less so... unless you have a workload that can use them all: video rendering/editing, ray tracing, compiling/linking.
      Basically, for MOST users it is just an example of the law of diminishing returns.
      I once worked on OCR systems, scanning printed text in to convert to braille.
      1 to 2 cores: 80% speedup.
      2 to 4 cores: maybe another 25%.
      But it would still misread just as many "S" as "5", "rn" as "m",
      "be" as "he", etc. etc.

    • @BruceHoult
      @BruceHoult 2 months ago +1

      @@patdbean MOST users is a moving target. My gf will happily tell you she knows nothing about computers, but like many millions of people she uses a fairly sophisticated video editor (VN in her case) on her phone to make videos for uploading to TikTok and Instagram. I don't know the details of that app but I expect it probably uses all the cores you have. Also "gaming" computers (my i9-13900HX laptop is sold as one) now have 16+ core CPUs and I gather that at least some games benefit from this, not only from a fast GPU, so this is not simple vanity marketing. That is, again, perhaps not "most" users, but it's a very sizeable minority.

  • @LewisCollard
    @LewisCollard 5 years ago +4

    12:35 TIL that RISC is actually Reduced Instruction Set Complexity (not Computer) and that makes more sense! Such a great talk :)

    • @hyperbole5726
      @hyperbole5726 5 years ago +2

      ah yes, CISC = Complex Instruction Set Complexity

    • @FlyingPhilUK
      @FlyingPhilUK 5 years ago +1

      @@hyperbole5726 Yeah - exactly
      - RISC is Reduced Instruction Set Computer (not Complexity)

    • @robertmaclean7070
      @robertmaclean7070 4 years ago

      @@FlyingPhilUK See the IBM RS/6000 family of RISC computers developed in England.

    • @FlyingPhilUK
      @FlyingPhilUK 4 years ago

      @@robertmaclean7070 Why?

  • @BruceHoult
    @BruceHoult 4 years ago +1

    A bit of a mis-speak there saying the 6502 32 bit add example takes 26 clock cycles. A rule of thumb for 6502 is that the execution time is equal to the number of bytes of memory read or written -- with a few exceptions that take a little more. Those LDA, ADC, and STA instructions all take two bytes of code plus the data byte read or written and so take no fewer than 3 clock cycles each (in fact exactly three). There are four each of them, twelve instructions, so that's 36 clock cycles. Plus two for the CLC, for a total of 38 clock cycles, not 26.
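
    The tally in this comment can be checked with a tiny sketch (assuming zero-page addressing, which is what the "two bytes of code plus the data byte" rule implies):

```python
# Per-instruction cycle counts on the 6502 for zero-page operands:
# LDA/ADC/STA zp each touch 3 bytes (2 of code + 1 of data) = 3 cycles;
# CLC is a 1-byte implied instruction taking 2 cycles.
CYCLES = {"CLC": 2, "LDA zp": 3, "ADC zp": 3, "STA zp": 3}

# A 32-bit add needs one load/add/store triple per byte, plus CLC up front.
program = ["CLC"] + ["LDA zp", "ADC zp", "STA zp"] * 4
total = sum(CYCLES[op] for op in program)
print(total)  # 38, matching the correction above (not 26)
```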

    • @BruceHoult
      @BruceHoult 10 months ago

      @@Ignat99Ignatov 6502 has a single bus. The first byte of the next instruction is fetched during the execute cycle of the previous instruction, providing some overlap. The execution times are as I stated.

    • @BruceHoult
      @BruceHoult 10 months ago

      @@Ignat99Ignatov I’m talking about the external bus. No matter what happens internally in the CPU - and yes I’m sure there are multiple internal buses - the execution time of a program in clock cycles physically can not be less than the number of bytes of code and data read or written.
      I know and have worked with those top Russian students, or at least the ones employed at SRR.

    • @BruceHoult
      @BruceHoult 10 months ago

      @@Ignat99Ignatov so I watched the video and it's not a very complete description. Fundamentally the "special bus" (SB in the docs) is not all that special. It's mostly just input A for the ALU (unless 0 is desired as that input) while input B for the ALU comes from the Data Bus (DB) or its complement. SB can be sourced from S, X, Y, A, or the previous cycle's ALU output (via hold register). SB can be enabled into the ALU input A, S, X, Y or A (via the DAA circuit), or also on to the ADH (address hi) bus or on to the DB. The DB can be sourced from A, P, PCH, PCL, the input data latch (DL), or as mentioned the SB. DB can be enabled into only ALU input B (optionally inverted), P, or the Data Output register (DOR).

    • @BruceHoult
      @BruceHoult 10 months ago +1

      @@Ignat99Ignatov I haven't heard that story about 2012! I'm from New Zealand. I worked as a remote contractor for SRR from July 2014 to March 2015, and on site as an employee in the Dvintsev centre from April 2015 to March 2018. Then I was at RISC-V company SiFive in California. Now starting contracting for SRR again as they are starting RISC-V projects.

    • @BruceHoult
      @BruceHoult 10 months ago

      @@Ignat99Ignatov sadly, it appears you don't know what RISC-V is.

  • @mattyfrommacc1554
    @mattyfrommacc1554 7 years ago

    Fascinating. I owned a Beeb and of course have a modern 1+ GHz quad-core ARM in my now very out-of-date Samsung S3 :) I appreciate the anti-Brexit references lol, Steve Furber was also doing it in his talks, we live in "interesting" times..

  • @DumpsterFire2048
    @DumpsterFire2048 7 years ago +16

    im just going to be honest, ive never used a computer.

    • @mattyfrommacc1554
      @mattyfrommacc1554 7 years ago +1

      lol then maybe this isn't for you :D

    • @jburr36
      @jburr36 6 years ago +5

      until just now when you watched this video and posted the comment, right?

  • @denni_isl1894
    @denni_isl1894 4 years ago +1

    27:30 parallel is needed

  • @Yanus3D
    @Yanus3D 3 years ago

    4 years later and a lot has happened....

  • @monetize_this8330
    @monetize_this8330 5 years ago

    I guess photonics isn't going to solve the scaling / power issues anytime soon.

  • @sonofhendrix1618
    @sonofhendrix1618 7 years ago +9

    Damn nobody asked about optical light processors....

    • @dannytun
      @dannytun 6 years ago +6

      I am also interested in the former Dutch colony of Surinam

  • @pauldusa
    @pauldusa 4 years ago

    Gordon next should think of more cars = more lanes. When I was working for Ampex Corp in Redwood City, CA back in 1987-89, with about 400 other great engineers, I was using Forth, if I remember right, on the Rockwell 6502F for a personal project. The old days, when separate micro, ROM, RAM & latch chips were needed! Now I use the Arduino, SAMs, Xplained, ESP, etc. (embedded). A personal, always challenging, fun past. I did travel far; now I'm much older, but it's been a great tech engineering life.

  • @persona2grata
    @persona2grata 4 years ago

    I don't have the technical or mathematical expertise to challenge her conclusions in this talk, but it does seem like what we're seeing in the industry contradicts some of what she's claiming. If I'm understanding her arguments correctly she appears to be claiming that speed and power gains below 28nm won't improve computation the way it used to, not even when adding more cores to run further in parallel. But here we are with 7nm chips coming off the line with 64+ cores and we are seeing speeds increase and power consumption drop, so I'm unsure where that gap is.

  • @ProfStuartHalliday
    @ProfStuartHalliday 3 years ago

    How's your back these days Sophie?

  • @rabidbigdog
    @rabidbigdog 6 years ago +3

    Sophie is an incredible figure in computing. I am disappointed that she seems a bit down on the 6502. For its time it was amazing; by 1982, for sure, there were alternative paths.

    • @skilz8098
      @skilz8098 4 years ago +1

      Especially when it was "hand designed"!

    • @jordanhazen7761
      @jordanhazen7761 4 years ago

      She was comparing later and present-day chips to the ~1975 state of the art, citing the CPU of that era with which she was most familiar, from her early Acorn days. The 6502 was quite cleverly designed given those constraints, more efficient clock-for-clock than an 8080 despite having a bit more than half its transistor count, but a few compromises did have to be made, like the hardwired 256-byte stack. Chuck Peddle has given some interesting talks touching on these design trade-offs, worth looking up...

  • @ProfStuartHalliday
    @ProfStuartHalliday 3 years ago

    Rather annoying that the Slides were delayed

  • @NathansHVAC
    @NathansHVAC 5 years ago +2

    This tradcon says Sophie is awesome!

  • @nicoblaytherealflamingo445
    @nicoblaytherealflamingo445 3 years ago

    Optical light. Pfft, car high beams can transmit so much power for a persons eyes if emf is stabilized in persons body mainly eyes to receives the signal to produce sight. Stop using it around 20/20, more so the person will see manipulative fragments or actual people walking through walls because colors shade looks so transprent its most likely to turn subject into overuse like that girl robot show from the 80s. Or, we did it. Pay me, teach me, kill me before the villain becomes over the top.

  • @florin604
    @florin604 6 years ago +1

    Really sad news :(

  • @ArnaudMEURET
    @ArnaudMEURET 1 year ago

    I feel embarrassed for her that she completely avoids talking about GPUs. 😢

  • @musaran2
    @musaran2 5 years ago +8

    Came for "The Future of Microprocessors".
    Got a sloooow talk about mostly past processors. :(

  • @johnknight9150
    @johnknight9150 3 years ago

    Nana-metres sound delicious.

  • @stevenyamada70
    @stevenyamada70 4 years ago

    Finkle is Einhorn, Einhorn is Finkle !

  • @mrrolandlawrence
    @mrrolandlawrence 5 years ago +8

    As fascinating as the talk is... the title of the video should be "the history of processors from an Acorn perspective". Noting that the slides show 2024 for the 8nm process; we have passed that already...

    • @paulmilligan3007
      @paulmilligan3007 4 years ago +8

      Roland Lawrence It's not Acorn, it's ARM, which is in 99% of mobile devices, and she has 7nm as in production 35 mins into the presentation. Don't put people off; it's worth watching to the end.

  • @BM-jy6cb
    @BM-jy6cb 4 years ago

    While Wilson is obviously an expert in chip design, I couldn't help thinking of minions whenever she mentioned "narnametre"

  • @BAZFANSHOTHITSClassicTunes
    @BAZFANSHOTHITSClassicTunes 4 years ago

    Oh. So that's Roger Wilson, is it.

  • @spudhead169
    @spudhead169 3 years ago

    When I can buy an ARM CPU, motherboard and RAM and build my own system, that's when I'll take ARM seriously. Until then, it's a gimmick to me: inflexible, system-locked SoC garbage.

    • @patdbean
      @patdbean 3 years ago

      Not just used in SoCs; server-grade ARM chips with up to 80 cores are now available ua-cam.com/video/23Md5K9D0Q4/v-deo.html and Amazon's Graviton ua-cam.com/video/KLz8gC235i8/v-deo.html

  • @Impedancenetwork
    @Impedancenetwork 6 years ago +1

    What is so amazing is that Sophie was born a man. A MAN! I don't know when he became a woman but I was shocked to find out that early photos of him at ACORN clearly show her as a him.

    • @jamieh9351
      @jamieh9351 6 years ago +3

      How is that amazing?

    • @FlyingPhilUK
      @FlyingPhilUK 5 years ago +1

      Sophie as a man (aka Roger)
      ua-cam.com/video/KKTa54UikgE/v-deo.html

    • @Sakura_Shadows
      @Sakura_Shadows 3 months ago

      So what? Trans people and sex changes are nothing new. They have just been sensationalised by right wing religious nut jobs in recent times.
      If her old friends of the same age like Hermann Hauser and Steve Furber can accept it then you can too.

  • @MrFarnanonical
    @MrFarnanonical 5 years ago +4

    I love how some of the greatest female computer scientists are men.

    • @ErlangSolutions
      @ErlangSolutions  5 years ago

      We'll respectfully disagree here! We had Miriam Pena last year listing the unsung heroes as well - ua-cam.com/video/3_tC52JNASo/v-deo.html

    • @bizzzzzzle
      @bizzzzzzle 5 years ago

      Erlang Solutions I think you misunderstood. They were calling her a man.

    • @Sakura_Shadows
      @Sakura_Shadows 3 months ago

      She's not a man though. She was and always has been a woman, but born in the wrong body at the start.

    • @sa3270
      @sa3270 1 day ago

      Oh no don't tell me...!