I’m mostly impressed the camera guy casually knew what week number it is
Pure coincidence from a different project I work on :) -Sean
Could have been bluffing, after all who would know (apart from subscribers to Computerphile - oh wait).
Still have no clue why the rest of the world hasn’t switched to the superior Swedish weekday/week-number way of referring to dates
The camera man stays knowing
@@axelnils does the first week of a new year always start on the same day of the week? Like a Sunday, or can it be different depending on where Jan 1 falls? I like the concept.
Very good video. As a retired HDD firmware engineer, I can tell you that storage devices had systems on a chip (multicore). In fact everything on an HDD has gotten smaller, uses less power, and gotten faster over the decades.
I'm sure everyone would like to join me in wishing the Acorn A3010 SoC a very happy birthday 🎂
Al
Mazel tov
"I am not a chip designer" - never thought I'd hear that disclaimer, hooray for engineers :)
IAACD (I am a chip designer) and this was a really good outside view of why SoC so good job all.
Only addition I'd make is that it used to take a couple of engineers about 2 months from inception to prototype to design a new computer (what I did before SoC), a few thousand quid for a handful of prototypes, and you could test and fix it in "real life" with a soldering iron until it was working and start selling the final product a few months later. With an SoC it takes more than 2 months just to very precisely specify what it does, and probably more like a team of 12 engineers, 2 years, and a few hundred thousand quid to get a chip that has a chance of working. There's no bodging it with a soldering iron when it doesn't, so add another pile of cash and 6+ months if you got it wrong, which is why most technological changes are now evolutions, not revolutions.
Not to mention getting capacity in a sufficiently advanced foundry with the right production tech, and apparently you can't just go to a competitor, at least not if you want the latest and greatest tech.
@@bjarnenilsson80 do u know how to code or program
Thank you Computerphile and Steve for a lovely video, once again! Probably my favorite channel on YouTube.
As far as signaling goes, all of this is true, even without taking capacitance and EM interference into account, which are far bigger problems for high-frequency signals and are much easier to manage / plan for in an SoC.
Actually I think you can go back a bit further, because in the early/mid 1980s there already were quite a few MCUs that combined CPU with timer(s), A/D converter, UART, etc.
I agree with you as most tech channels resort to attention getting behavior to seek views
SoCs are distinguished from microcontrollers by being at the heart of a PC or PC-like system. Microcontrollers typically integrate volatile and non-volatile memory. You could argue about terminology all day, but the intent is different between SoCs and MCUs.
@@tackline No, I am saying that custom ASICs are the same thing as an SoC, as they contain all the necessary building-block circuits that are needed for the main function of a device such as test equipment, big-screen TVs, network appliances, network routers, and specialized medical devices.
@@johnsenchak1428 ASICs were the precursors to SoCs. Like most technology, everything builds on the previous generation. PALs were the precursors to ASICs, and are still in use today. I remember, back in the '80s, using ROM chips as cheap PALs once you finalized your design, and using EPROMs while in development.
Yes, the 80188 and 80186 from 1982 were microcontroller versions of the 8088 and 8086. They had on-chip timers, DMA controllers, interrupt controller, clock generator, and wait state generator. No UARTs or A/D converters, though.
There were plenty of 8051 variants with UARTs and A/D converters and lots of other goodies.
I already know what a SoC is, but I watch the video anyway because I know I'll still learn a thing or two from Dr. Steve.
Thanks! Very interesting discussion of the differences and pros and cons. The 3GHz track length thing was fascinating! Hadn't appreciated at that speed we were getting into such considerations.
What did we have before microcontrollers? A suitcase full of TTL circuits.
You could fit it in a single suitcase?
@@narobii9815 depends on how big a suit David Byrne can wear
0:52 Disassembly was really easy back in those days.
More disassembly = Less profit
It's crazy that in one clock cycle light can only travel 10cm. These are some incredible machines
It's even worse, because signals travel slower through metal than light does in a vacuum.
+ more radiation pollution. + low-voltage DC losses over the huge 1970s-era cable and connector runs.
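To put rough numbers on the 10 cm point above, here's a back-of-the-envelope sketch in Python (the ~0.6c propagation speed on a PCB trace is an assumed typical figure, not something from the video):

```python
# Rough distance a signal can cover in one clock cycle.
c = 3.0e8            # speed of light in vacuum, m/s
f = 3.0e9            # 3 GHz clock
period = 1 / f       # one clock cycle, in seconds

d_vacuum = c * period          # light in vacuum during one cycle
d_trace  = 0.6 * c * period    # assumed ~0.6c on a typical PCB trace

print(f"One cycle at 3 GHz lasts {period*1e12:.0f} ps")
print(f"Light in vacuum covers {d_vacuum*100:.0f} cm per cycle")
print(f"A signal on a PCB trace covers roughly {d_trace*100:.0f} cm per cycle")
```

That gives ~10 cm in vacuum and only ~6 cm along a real trace, per cycle.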
Nowadays if you want to customize the entire thing down to the CPU, you can always plonk down an FPGA, use a soft ARM or RISC-V core and build the rest of your custom circuitry into the firmware without ever needing to design and produce a custom physical chip.
It's too expensive for most companies to do this. Hence dedicated chip vendors perform this and sell the customized chips to the client business.
@@anujmchitale It's worked wonders for the startup I've been a part of though. No custom chips or SoCs that limit choices of what other hardware you can attach. Just a big FPGA that does all the custom signal handling and you're set.
At the cost of lower clock speed and higher power than an equivalent ASIC though
@@Lttlemoi oh nice.
yeah, but then if you want to optimize the costs, speed and power consumption you have to convert it to ASIC. It's a trade off of course.
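To make that FPGA-vs-ASIC trade-off concrete, a tiny break-even sketch (Python; the NRE and per-unit costs below are made-up illustrative figures, not numbers from this thread):

```python
# Break-even volume between shipping an FPGA and converting to a custom ASIC/SoC.
fpga_unit_cost = 40.0        # assumed cost per FPGA in the product, USD
asic_unit_cost = 4.0         # assumed cost per ASIC once in production, USD
asic_nre = 2_000_000.0       # assumed non-recurring engineering cost (masks, tools, team)

# The ASIC wins once the per-unit saving has paid back the NRE.
break_even_units = asic_nre / (fpga_unit_cost - asic_unit_cost)
print(f"Break-even at ~{break_even_units:,.0f} units")

for volume in (10_000, 100_000, 1_000_000):
    fpga_total = volume * fpga_unit_cost
    asic_total = asic_nre + volume * asic_unit_cost
    cheaper = "ASIC" if asic_total < fpga_total else "FPGA"
    print(f"{volume:>9,} units: FPGA ${fpga_total:>12,.0f}  ASIC ${asic_total:>12,.0f}  -> {cheaper}")
```

With those made-up numbers the ASIC only pays off somewhere past ~55k units, which is roughly why startups stay on FPGAs and phone vendors don't.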
I think you should have mentioned the most important leap in modern SoC design: incorporating the RAM.
The RAM on the new M1 Macs isn't really "on the chip" - it's still a soldered-on component. It is in fact possible (albeit via extremely specialized work) to upgrade the memory on an M1 device.
@@jpdemer5 True, but the fact that it is not on the same die doesn't change the characteristics in my opinion - it is still simpler, faster, less flexible, and non-upgradeable (for practical purposes). What most people consider a "chip" is the package anyway, although I agree technically speaking it is "System-in-a-Package".
@@leomuzzitube that's the thing: the RAM is on a separate package, which is then soldered on top of the CPU. They are two separate packages. That said, most SoCs do have a decent bit of internal SRAM, which is great from a bootstrapping perspective (DRAM tends to need to be "trained"), as well as a security perspective (raises the barrier to entry for attackers).
The RAM is on the same chip but on a separate die within the chip, if that makes sense. It's what we would call separate chips back in the day but now it's packaged together in an enclosed shell which looks like a classical chip at first glance. We now call an integrated system on the same silicon wafer - a die, and the package which may contain several dice - a chip.
@@elimalinsky7069 this isn't correct. They have a BGA DRAM chip which then gets soldered on top of the CPU
Dr. Steve Bagley, what a lovely guy. Looks like he's living the dream with all the retro devices in his office!
ARM250-based Archimedes machines (A3010, A3020, A4000) can be very easily overclocked up to 24 MHz thanks to faster DRAMs.
It's all detailed on the Stardot forum.
I'm sure I heard somewhere that the latest Raspberry Pi is originally a SoC for a set top box.
That's entirely possible, a lot of those sorts of devices (which are effectively computers in their own right) make use of ARM SoCs like the one you'll find on a Raspberry Pi. Edit: did a bit of digging, the SoC in the original Pi (v1) was indeed also used in set-top boxes like the first-gen Roku. The SoC in the latest generation (Pi 4B) is more or less made for the Pi but very much resembles a different model Broadcom (the SoC manufacturer) also makes, which is once again intended for set-top box usage.
The first one certainly was.
@Nick Williams do u know how to code or program
This video was absolutely amazing. Please, do more like these. Similar topic, maybe about other HW…
There is also a tremendous speed and power advantage to routing high-fan-out peripherals on chip vs. off chip. Any time you have to pass a signal through a pad driver and off the package, the driver power goes up and the speed of the line goes down. One of the great drivers of SoCs was the making of cells out of standard design blocks such as CPUs. To do this, silicon designers would separate the cells (CPUs, peripherals, etc.) from their pad-ring drivers. Thus a single cell design could be dropped into an SoC using multiple cells internally, or dropped into a pad ring to make a stand-alone design.
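A toy CV²f comparison of driving a short on-chip net versus an off-chip trace through a pad driver (Python sketch; every capacitance and voltage value here is an assumed illustrative number, not a measurement):

```python
# Dynamic power P = C * V^2 * f, assuming the net toggles once per cycle.
f = 100e6                        # 100 MHz signal
v_core, v_io = 1.0, 3.3          # assumed core vs I/O supply voltages
c_on_chip  = 0.2e-12             # ~0.2 pF for a short on-chip wire (assumed)
c_off_chip = 15e-12              # pad + package + PCB trace + receiver (assumed)

p_on  = c_on_chip  * v_core**2 * f
p_off = c_off_chip * v_io**2   * f

print(f"on-chip net:   {p_on*1e6:6.1f} uW")
print(f"off-chip line: {p_off*1e3:6.2f} mW  (~{p_off/p_on:.0f}x more)")
```

The exact ratio depends entirely on the assumed loads, but the hundreds-of-times gap between driving a tiny on-die wire and driving a pad plus board trace is the point.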
I love that CTM644 in the background, on top of the MP-3 TV tuner. Really nice!
One more advantage of an SoC is power efficiency.
Transferring data uses more power than the actual calculations in some discrete systems.
Having the data close reduces that energy requirement.
Also latency is reduced.
This also translates to cost savings for cooling solutions.
And weight savings for mobile devices.
It beggars belief that we haven't already moved more discrete systems onto SoCs and left everything else for where you really need the extensibility
@@Freshbott2 There are multiple factors. It's harder to plan for the demand for a more specific piece of circuitry. And more complex SoCs do use more die space, which reduces production yields. A modern PC is close to an SoC. But since it's rather complex, it's more economical to leave some aspects modular. So at the moment, it's either a hybrid approach, a smaller SoC, or a very big production run from a company that can dictate what the market demands. For example, Apple's M1 chips are such a case. They can afford to ignore every customer whose needs are not fully met.
@@heyarno I don't really think that's it though, because whether it's a Mac or a PC laptop of whatever kind, it's already not meeting the majority of people's needs fully. The only people whose needs are fully met are the ones an SoC wouldn't benefit. Forget all the different encoding blocks and enclaves and DSPs etc. - just the efficiency gain from moving to true integrated graphics is what people want. Everyone wants that MacBook body.
Nice. Very comprehensive video. Lots of good info in the comments too
I'd also like to add that an SoC is not only about computational power and the exact on-chip circuit design (features and so on), but also about how end-user devices are manufactured. The actual circuitry on the die is way smaller than the final chip "package", because the package is used not just as a shell but also as a heat conductor and an array of contact points. There are pros and cons that follow from that fact too.
The 1984 Commodore/MOS "TED" chip was a sort of early SoC, prior to any ARM chip. Please, some credit for Commodore, MOS/CSG, etc. What insane obsession with Apple/ARM.
I didn't know about that. Seems like a 6502 version of the ARM 250.
"Somewhere here in my archive" - which is everything in a 200 degree radius behind his back.
Uh... radius is not measured in degrees. Angles are measured in degrees. Go back and repeat 5th grade.
@@morpheus6749 Field of view, you nit.
@@RagHelen LOL.. radius is not the same thing as field of view. Like I said, go back and repeat 5th grade. Maybe this time you'll learn what radius means. Maybe.
@@morpheus6749 it was sort of obvious what he meant, and while he was wrong, you could have put it more nicely. Your little moment of "superiority" there just shows what a badly adjusted person you are
@@reflectedcrosssite2848 Obvious? What exactly does "a 200 degree radius" mean to you? Do tell.
Me, an ECE student, who somehow managed to pass this semester: _"hmmm, interesting..."_
Jokes apart, it's videos like this that keep me going. Thanks!
EE student here, still struggling.
studying EE is not worth it imo unless you want a job in academics or some fancy prosthetics firm. Most of what I learned about engineering (digital circuits, microcontrollers, etc.) I learned either on the job or DIY style. The most complicated formula from my studies that I actually used was the capacitor charging curve...
@@QuantumFluxable I have no choice since it's either EE or CS at my college. There is no computer engineering, and I want to learn more about hardware design (VLSI design, embedded systems).
I recall reading about an open source project years ago for a SoC that implemented an IBM PC, though with VGA rather than CGA or MDA and no FPU option.
Wow. Now I understand the whole concept. Great video 😇
So would you consider the ULA on the ZX81 to be a primitive version of an SoC?
ULA is generally considered part of the CPU, so no.
@@framegrace1 it's another chip, nothing to do with a CPU
@@Archimedes75009 YEah. sorry. Thought you were talking about the ALU. (ULA is ALU in my language, hence the confusion).
@@framegrace1 Ah, I see, I am French and it's the same here :-)
Yes and no. I mean, I'm guessing it isn't a complete system, so no, but presumably it allowed them to create one chip that did the work of what would otherwise be several different ones. So while it wasn't a complete system, it presumably had at least some of the advantages of an SoC around cost/complexity saving, at the alternative expense of needing to design (the final manufacturing stage of) the ULA chip.
I love that you used Acorns to explain SoC
Given how many more transistors that, say, the Z80A of a Spectrum has than the Spectrum ULA, it's a shame that SoCs didn't come about in around 1976. Put a DRAM-compatible interface on a ROM/PROM and a relatively cheap computer could have been built with a 40-pin SoC.
In a way they'd already started down that road with the Z80A--the CPU there included the capability to refresh the DRAM in the computer, which previously had been the job of additional circuitry on the board in earlier microprocessors.
@@d2factotum do u know how to code or program
The Motorola 68HC11xx chips were a lot of fun to play with, and they were used in a lot of different consumer devices.
The primary design constraint in GHz+ PCB design isn't propagation delay but the fact that those are microwave frequencies. Traces act like transmission lines, not wires, and fast edges generate very high frequency harmonics. You can wave away impedance matching at the MHz clocks speeds of 1980s/90s machines, but not now.
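A quick way to see when traces become "electrically long" (Python sketch; the ~0.5c propagation speed in FR-4 and the "longer than about a tenth of a wavelength" rule of thumb are common assumptions, and as the comment notes, fast edge harmonics push the relevant frequency well above the clock itself):

```python
# When does a PCB trace need to be treated as a transmission line?
c = 3.0e8
v = 0.5 * c                      # assumed propagation speed in FR-4 PCB material

for freq in (8e6, 100e6, 3e9):   # 1980s micro, 1990s bus, modern clock fundamental
    wavelength = v / freq
    critical = wavelength / 10   # common rule of thumb for "electrically long"
    print(f"{freq/1e6:6.0f} MHz: wavelength ~{wavelength*100:7.1f} cm, "
          f"traces longer than ~{critical*100:6.1f} cm need transmission-line treatment")
```

At 8 MHz you can run almost two metres of track before worrying; at 3 GHz (and the even higher harmonics of fast edges) it's millimetres, which is why impedance matching can no longer be waved away.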
This must have been the kind of thing that totally makes sense in hindsight. Of course you're not going to run a bunch of traces that are unnecessary if you just cram those modules together.
It's made sense since the first integrated circuits at the tail end of the 1950s. It just took time for it to become cheap enough to put everything on a single chip.
"just" like its that easy?
I'm a simple man. I see Steve Bagley, I hit like.
00:52 That lid was ready to go 🤣
It always amazes me that the UK had so many native computer types that were their own thing and weren't exported.
Oh, that “if you remember” moment)))))) nice
The ARM250, which you found in the A3010, ran at a clock speed of 12 MHz vs the ARM2's 8 MHz, but while the latter only managed 4 MIPS, the former ran at 7 MIPS, and that extra 1 MIPS of speed (beyond what the clock bump alone would account for) was, in part, down to the fact everything (aside from the memory) was so tightly integrated on the ARM250 SoC!
I can't see how that would make any difference. It's literally all four of the original design placed on a single die, without so much as bothering to rework the layouts so that they fit together better. I can believe in better compiler optimisers (MIPS is measured from a particular C benchmark) and for the same screen mode, the video will be using a smaller fraction of the total bandwidth.
@@tackline That's 7 MIPS for the 12 MHz system vs 4.5 for the 8 MHz one. Yes: 2.5 extra MIPS DOES matter.
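Putting the thread's own numbers side by side (Python, using the 4 to 4.5 MIPS @ 8 MHz and 7 MIPS @ 12 MHz figures quoted above):

```python
# MIPS-per-MHz comparison using the figures quoted in this thread.
arm250_mips, arm250_mhz = 7.0, 12.0        # ARM250 in the A3010
arm2_mhz = 8.0

for arm2_mips in (4.0, 4.5):               # both ARM2 figures quoted above
    scaled = arm2_mips * (arm250_mhz / arm2_mhz)   # what a pure clock bump would give
    print(f"ARM2 at {arm2_mips} MIPS: {arm2_mips/arm2_mhz:.2f} MIPS/MHz; "
          f"clock scaling alone predicts {scaled:.2f} MIPS; "
          f"ARM250's 7 MIPS ({arm250_mips/arm250_mhz:.2f} MIPS/MHz) is "
          f"{arm250_mips - scaled:+.2f} MIPS vs that")
```

So depending on which ARM2 figure you take, the per-clock gain is somewhere between roughly +0.25 and +1 MIPS, which is why the thread disagrees about how much the integration itself bought.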
hey @Computerphile, I really hope that you will discuss quadtrees next time and their use in image comparison or sorting spatial data (the main purpose of the invention of the quadtree in 1974)! Thanks in advance!
10:16 - I think you forgot to put the disclaimer up Sean! 😀
I thought this explanation of SoCs was very good indeed, and most interesting as well. Many thanks.
Any videos where you go over your collection of gear there? If not, make one!
I believe that future low-cost computers for business and home use will all be SoC systems. It's an easy way to reduce the cost, yet provide the functions the majority of people will want.
They already are! What I hope is console grade SoCs in Windows gaming thin-and-lights will show up soon
Apple M1 arrived last year
Great channel. I have downloaded all its videos with an external YouTube download program. Just check Wikipedia for the article: Comparison of YouTube downloaders.
Acorn could have been HUGE. I know that Arm is super successful but I wish Acorn were still around. I loved their computers, so ahead of their time.
Modern x86 processors are kinda SoCs too - built-in cache, built-in memory controller, built-in I/O PCI-Express bus controller, etc. Compared to even 20-25 years ago, when there was a 'north bridge' interface for the memory controller and a 'south bridge' for I/O. If RAM serves, that setup lasted kind of until the Intel Core 'i' era. On certain even older ones (386, 486?) even the cache was external. If nowadays a GPU is built in, all you need is RAM and storage, and it feeds video output directly from the chip.
Oh man, 1987! Awesome.
Not on topic, but... I wish that I still had my old Commodore 64 with its floppy disks and cassette tape storage to show my teenaged son.
How did I know he'd use an Acorn as the first example lol
They seem oddly obsessed with them there.
@@DanEllis Because the ARM250 is the 1st ever SOC. And it was created by Acorn.
I guess you can ignore curves or bypasses in traces when calculating transmission speed, due to transmission line theory? Only the distance between origin and destination is important when calculating the time needed for the signal to arrive. Is that correct?
Transmission law theory?
@@the1exnay ohh sorry. Transmission line theory.
You can make a wireless micro-bridge between GPU/CPU/Microchips shielded from outside interference and it can act as a BUS for interoperability and comms; eg. cryptography/timebased as the chips would "know" the distance to each other and can then encrypt the data.
Do you have any links demonstrating this? Sounds interesting.
You could also just use wires. 🤷♂️ Wireless is slower and much harder to design. It's only really better for portability.
Intel Labs seems to be going the route of photonics. I heard about it in an interview with Ian Cutress (AnandTech) on his YouTube channel.
The idea that squiggly routing of connection wires would impact transfer speed - isn't that contrary to the point made in the latest Veritasium video, The Big Misconception About Electricity?
It's still a distance to cross. If the electrical signal needed to pass the long way around the case and traveled a meter to do so it would still matter that the component was 10 cm away.
The signal speed is the speed of the electric field, which is also around the speed of light. And the field also "travels" along the traces. The electrons themselves are still pretty slow.
Veritasium's claim is technically true, but in practice not really relevant, since the amplitude of the signal induced by the fields is much, much lower than the actual signal. In high-speed circuits the signal path and its return path (i.e. ground) are routed very close together to minimize the spatial extent of the EM fields. In the Veritasium video they were "routed" 1 ly apart...
In the Veritasium video he's really talking about the ability of the wires in the circuit to act like antennas (which I really wish he had explained better, or at all). By treating the wires as antennas you can design them to control how much power is transmitted as radiation rather than via conduction. If you don't actually want an antenna then you just design your wires to suppress that effect as much as possible.
The effect gets more prominent as the wavelength of the signal on the wire gets closer to the size of the wire, so high frequency (short wavelength) circuits are basically always limited by your ability to keep your wires from acting like antennas. That's one of the reasons why CPUs can't just keep getting faster and faster.
The leading edge may get there, but transistor logic doesn't fire on the faintly discernible leading edge, it fires on a voltage threshold. If the transistor needs to see 1V to switch on, it doesn't matter how fast the first couple microvolts get there.
Awesome video:)
Why are they talking about Shadow of Chernobyl?
No mention of key terms from back in the day: northbridge and southbridge?
We are reaching performance levels where latency is limited by distance. Signals want to travel at the speed of light, but they get held back by the fields generated in the system.
The trend will continue
So ... essentially...
An SoC is just a fancier, more capable microcontroller?
Like to hear some thoughts on this.
One of the disadvantages of an SoC is that more die area is required than for the individual components.
So in big designs, the yields become less economically viable.
That’s why Apple charges so much.
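A rough feel for why die area hurts the economics so much, using the simple Poisson yield model Y = exp(-A*D) (Python sketch; the defect density and die areas are assumed illustrative values):

```python
import math

# Poisson yield model: fraction of dies with zero defects.
defect_density = 0.1                       # assumed defects per cm^2 (illustrative)

base_area = 1.0
base_cost = base_area / math.exp(-base_area * defect_density)

for area in (1.0, 2.0, 4.0):               # small chip vs big integrated SoC
    y = math.exp(-area * defect_density)
    cost = area / y                        # silicon cost per *good* die, relative units
    print(f"{area:.0f} cm^2 die: yield {y*100:4.1f}%, "
          f"cost per good die x{cost/base_cost:.2f} (relative to the 1 cm^2 die)")
```

Doubling the die area roughly more than doubles the cost per good die, because you pay twice for the silicon and then again for the extra dies lost to defects.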
I have an "archive" like Steve's (Dr. Bagley) too: a big pile of old computer crap that I can't bring myself to throw out!
Has anyone noticed how poetic it is, an iMac and an Acorn sitting on the same table?
Fruit and nut ...
Nice, even for beginners. Thanks!
The case of Diet Coke on the desk is just... _Chef's kiss_
This episode of Computerphile brought to you by...
Superb!
And now we are moving "back" to bridges with chiplets...
3:44 It almost looks like the serial controller has USB logo on it :)
Integration progresses in many directions, but it gets a bit complicated with MCMs. Should we call them System-on-Package? Or does the lack of a GPU make them not count? But then APUs are clearly SoCs, as those are monolithic.
And technically all of them are able to run without a chipset at all, but I don't think there are any aftermarket chipsetless motherboards.
For Intel there are mobile SoCs that have the PCH (chipset) die on the "CPU PCB", but not all of them do that.
Really interesting :)
South bridge, North bridge. I feel like these are terms you aren't using, but should be with these systems.
Nice video, keep it up , thank you :)
Are Intel or AMD CPUs SoCs? They have many controllers built into the "CPU".
Nice video
I'm going to start calling my pile of dusty electronics "the archive" also. Might improve partner approval factor.
The market is for computer users so enthusiasts and such aren't really a part of that. Moving computers to SoC means that a number of things are given up. Expandability, particularly RAM expansion for example, is gone. You need to buy the computer as you want it (and think you may want it before end of life of it) because you won't be easily doing things like adding more RAM to it. You would have to replace the whole SoC if you wanted to go from, say 8GB, to 16GB. So if you buy a machine that, after you get it, you find that you really need more memory, you have to buy an entire new computer and figure out something to do with your older computer. Hopefully you can figure that out within the return policy of the thing and can send it back and get the one you want.
Ah! That was a Pixel 6, eh? Nice.
So half of the PC systems today are SoCs, with integrated graphics and integrated memory controller and integrated north bridge?
Great topic
I'm still here
I like the fact that the coke container just hang around in the back
Awesome video also I am making my own coding security company while in 11th grade lol
In addition to signal speed limitations, voltage loss over, say, 10x to 10,000x greater distance with 1960s motherboard technology is not insignificant. Plus increased radiation emissions. It seems there should be a huge market for highly upgradable SoCs.
True computer modding "enthusiasts" would of course need to learn to solder or pop down to the repair shop to have our new components hooked up.
Seems motherboards and all the ridiculous connector cables and cable management nonsense need to become irrelevant relics!
We would love an episode on the SoC modding going on at the institution there.
My Apple Silicon Macbook Pro is the best computer I've ever owned.
for a looong time we had chipsets (and still do for desktop motherboards).
So... How does this work vs Veritasium's light-second light bulb circuit, which lights almost instantly because of the energy field? If the light turns on almost immediately despite very long wires, why is PCB track length an issue that needs to be considered in relation to "time taken for the electrical signal to reach the other end"? Based on what Veritasium was saying, it seems like track length wouldn't be important and only the position of the chip relative to the CPU would matter (it wouldn't matter if the tracks were different lengths or went all around the houses to reach their destination).
I understand there might be issues with attenuation and potentially signals resonating within the wires, but here it seems you are focusing on the speed of light and the length of tracks to determine the arrival of information (which makes sense to me), but it seems to go against this whole concept that Veritasium has got people talking about.
At gigahertz clock speeds, it takes a long time for signals to get anywhere.
It's not just cost reduction, there is also the electrical engineering principle that the lower the component count, the higher the reliability.
That principle applies to any engineering field
Most flat-screen televisions today have an SoC (a custom ASIC). Also, when you combine more silicon on one chip you use less power and therefore get less heat dissipation, because the transistors keep decreasing in size on the die.
Thought I spotted an A3010 in the thumbnail... slightly disappointed it's not an early production model with a mini-motherboard...
From memory, the mezzanine board. Always loved that name.
This stuff is so complicated by now that everything is both a system on a chip and a system not on a chip
Is there a way to audit SoC to prevent big tech from adding subsystem or backdoor?
Creating it in a hardware description language would at least ensure the intended design is free of backdoors. If you used it to program an FPGA, you'd probably be alright, though it's still possible for the software used to create the bitstream to maliciously insert functionality. If you used an open-source toolchain as well, you'd eliminate that avenue. Anything beyond that, like a fixed-function block in the FPGA itself, is getting into "more trouble than it's worth" territory for the malicious actor.
oh... I thought that were a "photoshop chip", a "crysis chip" to address "can it run crysis", a "Epeen chip" to run cpu-z exclusively, and an nvidia chip so that linus (both of them) can give nvidia the finger.
Not exactly the exact understanding, but close enough. I do CS too!
Was a bit repetitive but still quite good
Box of Diet Coke + bunch of old computers = great office
0:51 Amiga :)
Who is the man asking questions?
He's got a whole case of diet Coke in the back. Curious.
Disadvantage of SoC: a bigger chip means worse yields, which makes the chip more expensive.
If you had DC power rails, not AC, you would not have to have complicated power systems in the box; also if you had memory in the chip, it would be just one chip
and usb-over-pcie, one port standard
Thousands of pounds of discrete giant PLCCs looked a lot cooler & were more rewarding to program. The hardware has been replaced by language semantics.
That was an incredibly long-winded way to say "Easier to design, cheaper to manufacture, physically smaller, and consumes less power."
Hey Ryan
Damn Ryan got a lot of deletion to do
It does concern me a little that the chip could be more easily damaged, and there's no chance of recovering anything.
Or that it's less upgradable.
This is why, with my new M1 Mac, I paid for AppleCare, for the first time ever (after being a Mac user for 35 years): I could always afford DIY repairs, but I can't easily afford replacements. Insurance has become necessary.
How do you think Apple became a trillion-dollar company?