Google's Open Source Hardware Dreams
- Published Nov 9, 2022
- Check out the Build Custom Silicon with Google website here: developers.google.com/silicon
Check out Matt Venn's channel @ZeroToASICcourse and his new project Tiny Tapeout: tinytapeout.com
Links:
- The Asianometry Newsletter: asianometry.com
- Patreon: / asianometry
- Twitter: / asianometry
A few fun little facts:
NCSU publishes their free PDK for the 45, 15, and 3nm process nodes.
It's how I learned VLSI myself, and I'm at the big green company these days.
And, the 130nm process node is optimal for satellite systems. Radiation flips bits less often when you're on larger transistors.
Hi, how long did it take you to learn VLSI? Did you start learning from zero, or were you in an EE/CE college program?
I also fantasize about open-source projects for CPUs, GPUs, FPGAs, and a few ASICs with at least one project in each category having a humongous contributor base so that people get good hardware without any IP fee (only manufacturing cost). There will be huge libraries so that anybody can make a custom chip for their application or simply pick the leading FPGA to program it to their needs.
There are some big open-source projects like Linux and applications for Linux, ROS, OpenCV, some CAD software, etc. And Linux made Microsoft afraid when they were dependent on selling Windows to earn money.
Better start learning and tinkering now while we're still in the early days! :)
I'll be trying.........
Funny you mention Microsoft's early fear of Linux. Now they're a significant player in that space, with the Windows Subsystem for Linux feature in Win10/11 allowing Linux distros to run inside Windows.
I think this is totally possible. True, the cost of making a small number of chips is going to be really expensive, but if there's enough people that want in, and the cost of small runs comes down a bit, this seems within reach in the next few years. The open source community has the design talent.
The Risk-V CPU is already open source, but good luck finding any manufacturers / operating systems. The truth is that without mass adoption, it will never be a thing.
@@firstNamelastName-ho6lv Yeah, simulation may be nice, and broad knowledge in society about how to build chips is (like all education) a good thing, but producing a REAL product always goes hand in hand with a RISC[1], sorry for the pun, a risk. Reminds me of all those people who want to be a "superstar". Yeah, they are real ... but there aren't enough "free slots" to let you become one. You know what I mean? It isn't even the worst strategy: instead of paying decent wages or taking care of all the tedious education of your future employees, why not make them believe they ALL can become a superstar, then pick the best and throw the others in the trash. So THAT reminds me of something ;) Hehehe.
BTW of course the provision of free development opportunities and access to research for even laypersons is never wrong. However, giving someone false hope (like any hype) is antisocial and despicable. At least a very expensive hobby ... but in my opinion more useful than botching sports cars or shooting automobiles into orbit.:)
[1] Did you actually write "Risk-V CPU" up there (it is RISC-V!)? I noticed it only after writing down my joke. Or did you inspire me? Anyway: Hehehe, well done!:)
Great video Jon! Thanks for the mention. Thought you did a good job on framing the open source tools and the aims.
Disappointed you didn't provide a diagram...
Thanks for your great work, Matthew.
8:39 Thanks for your comment on our work!
It's nice that companies are creating these tools, otherwise it's nearly impossible to get into microelectronics design on your own.
7:56 - when would such a thing happen in *hardware*
I do it for fun on paper 🤡
This is so cool! The first application I thought of was for one of my other interests: astronomy. The issue that large, ground-based telescopes face is that the atmosphere "smudges" the image they create with every bit of turbulence in the air. Nowadays the big fancy observatories get around this by using "adaptive optics", which are optical surfaces that can change shape thousands of times per second to perfectly counteract the distortion caused by turbulence. The whole system uses machine vision, and the calculations have to be done super fast (again, at least thousands of times per second), so (I think) it takes dedicated hardware to do it. But that's out of the price range for a lot of observatory projects. I remember reading a research paper that looked at consumer grade GPUs vs FPGAs to tackle the problem in a cheaper way, and I think they ended up performing kind of similarly. But making a cheap(er) ASIC for that exact task would be incredible!
Who knows if that's still how the research stands, that was a few years ago that I read that paper, and I don't know when it was published. So it could be that modern GPUs are so performant that they are the best option. Who knows, but that's just what I thought of.
How about doing lucky imaging in real time? Would that be possible/interesting?
@@alanparker3130 That's definitely something I've read about them trying, but the issue is it only works with very short exposure times, which doesn't work for dim objects. Plus if 1 in 10 frames is actually OK, then 90% of the time observing was fruitless. Also, when telescopes get really big, different parts of the image are under different patches of warm or cool air, so the image isn't uniformly distorted. So that means you have to de-warp some parts of the image but not others, which is possible but it adds steps to data processing. So I think there's enough reason to go with adaptive optics that that's the main solution they go for.
This might well offer you the chance to shift a well designed FPGA implementation into a dedicated IC. But why would you? It's a slow process node, and once you have made your chip, that's it. Warts and all, all design errors baked in for free and forever. See that 'RTL' bit in the diagram? Register Transfer Level - that's your hardware description - exactly what's needed for an FPGA. I can see that for a dedicated, simple task, where the consequences of error are low, it's cool. But if you can write HDL first pass with no hidden errors, then there's an entire industry that wants your number. The principal reason those 'professional' EDA tools are so expensive is that they are intended to prevent or trap (or at least make sure you find) those hidden bombs. I expect most of you are programmers, and know that states are 1 or 0? In an HDL, you typically have 9 states, and you need to be sure what happens if your signal is in any one of them - because hardware - one day, it will..
I feel a bit like the guy asking what use is a computer? It's definitely a cool idea. And don't get me wrong - I'm going to have a play once time permits - but as an FPGA dev, I don't quite see what it offers you besides being cool. Sure, the big FPGAs are viciously expensive, and the tools are 'interesting' to acquire (although for the home user, they are free, so..) and a pig to learn, but I don't see this being any easier, and there are cheap FPGA boards out there now..
@@timwatson682 What do you mean by "9 states"?
I’ve done ASIC design for years. There is 0 and 1. The difficulty of an error free high performance design is that there is an extreme amount of parallelism and pipelining that make verification of rare corner cases hard to enumerate and/or encounter.
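For the curious, the "9 states" mentioned above refers to VHDL's nine-valued std_logic from IEEE 1164 (Verilog uses four: 0, 1, X, Z). Here is a toy Python sketch of why multi-valued logic matters — this is a simplified resolution function covering the common cases, not any real simulator, which would use the full 9x9 resolution table:

```python
# The nine IEEE 1164 std_logic values: U X 0 1 Z W L H -
# (uninitialized, unknown, strong 0/1, high impedance,
#  weak unknown, weak 0/1, don't care).
STD_LOGIC = "UX01ZWLH-"

def resolve(a, b):
    """Simplified resolution of two drivers on one net."""
    assert a in STD_LOGIC and b in STD_LOGIC
    if a == b:
        return a
    if "U" in (a, b):
        return "U"   # uninitialized dominates everything
    if a == "Z":
        return b     # high-impedance yields to any driver
    if b == "Z":
        return a
    return "X"       # conflicting drivers -> unknown

print(resolve("1", "Z"))  # -> 1  (tri-state bus, one active driver)
print(resolve("0", "1"))  # -> X  (bus contention: two strong drivers)
```

The point of the thread above: a synthesized net really can sit in Z, X, or U at power-up, and verification has to cover those cases, not just 0 and 1.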
I definitely remember hearing about this from one of my astronomy professors. Hopefully this will become reality in the near future
How timely - I just literally started the NAND to Tetris course on Coursera yesterday night specifically because I'm interested in getting into the hardware stuff.
Well that's one way to raise talent in a hotly contested field where companies fight over a limited pool of chip designers
That neural network chip was done as part of the spin-out of the startup company Isocline from UofM, which eventually rebranded itself as Mythic Semiconductor. Unfortunately, they just went out of business this week, as they ran out of money before getting substantial revenue.
The Mythic chip is (was) amazing technology; a white paper describing its design was on the website: multi-level-cell memory. We had just received a dev board when we heard they ceased trading.
How much great technology never reaches the market because of funding issues.
Then you hear about the hundreds of millions of dollars spent on shitty things like NFTs etc. and want to cry.
Imagine the technological progress we could have if more people valued science and technology.
I imagine the reactions I would've gotten if I uttered the words "180nm open source architecture" in some of those meetings back in the day.
I'd have a suite in Bellevue, and a nice white shirt with very long sleeves.
Eh, I feel like unless you had those before, or had proof it would work, it probably wouldn't glow up your career _that_ much.
Finally, now Gentoo users can compile their CPUs
Someone could do computer preservation with this.
All patents from that time frame should have expired, making it legal for people to design new hardware compatible with pre-2000s computing, where existing hardware is getting more and more rare. A community C64 or even Win 98 / DOS compatible SoC would be very welcome in the retro computing niche.
I'm actually interested in that.
Holy.
I haven't had so much inner giddiness about a project in a long while. It made me smile.
Now the question is, how low can you go to make a "completely custom thing"?
What can you do with 130nm? All sorts of stuff, including stuff you might never think of. I work for a major semiconductor manufacturer on power converters. Most of our state of the art processes for analog and power devices are around there. That's a great spot for high performance CMOS analog. 130nm might not be good enough for cutting edge digital, but you can still build amazing digital on that. And possibly quite a lot of analog circuitry, depending on how the process is tuned.
Remembering Google's track record of suddenly discontinuing projects, I wouldn't have high hopes regarding this one's longevity.
I thought about that. I figured even if Google pulled the plug, the tools and the code repos will still be around.
Google's open source projects tend to last longer, and if Google suddenly killed it, someone would pick up the torch.
As the others have said, open source is a bit harder to kill by dropping the project than their in-house apps and systems which require massive server farms.
Lol it's literally text files on git.. how do they discontinue it
@@Asianometry Also, Google likes to kill small, niche user-facing projects that have limited long-term strategic value. When it comes to things designed to change the entire ecosystem (e.g. make their AI accelerators cheaper in this case), then I think they tend to commit a bit more. Their culture encourages tinkering, so obviously you see a lot more stuff that fails, but I don't think this one is necessarily in that category.
BTW, any chance of putting the links referenced in the video into the description, so they're more accessible?
Wait a minute. So…does that mean that I, just your average Joe, can design my own silicon, and send it to , and they mail me…my ASIC? Or do I get it wrong?
Because if so, that would be pretty neat. I could finally design my own CPU. Which sucks in every way, sure, but it's mine!
That would be cool indeed. I suspect it's still incredibly complex and expensive and I wouldn't be surprised if there's minimum quantities and so on, but it's a step in the right direction.
A recruiter came to me a couple days ago about that! Google is looking for devs with a microelectronics background to develop its CAD tool.
Honestly, I've felt for quite some time that with some senior legacy-nodes coupled with some slick coding quite a few modern day things could take place (especially as some people start using dumb phones instead of smart ones).
I hadn't assumed I was the first or foremost by a long shot, but it's nice to see it actually take place. 👍🏾
Another great video. 👌🏾
I've been waiting for this for so long. If you hadn't told me, I'd have missed it.
Many thanks.
Jon, this video reminds me that I would enjoy a video on the historical development of SOI in comparison with the bulk stuff or High-K metal gate.
That would be interesting for sure.
Excellent video, oh, FYI, GDS II is pronounced GDS "two", it's the second revision of GDS (Graphic Design System) from Calma.
I really enjoy Google's strategy of "Help people so they can help everyone, as well as ourselves". We've seen it with VP8/9 and here. Quite honourable of them!
Very nice to see papers from the University I got my BEng in Electrical Engineering (Federal University of Rio Grande do Sul - Brazil) being mentioned in your videos.
It's a natural trend I've seen: from valves in chassis with components wired to strip terminals, to through-hole 0.1" pitch, surface mount, online fully assembled PCBs, and now a pathway to the golden land, mostly silicon. For hobbyists I think the Arduino is so versatile this wouldn't come close to an M7 Teensy. But for pure ASIC design, small-run custom, time-critical work? Interesting.
It turns out Jon is not just a researcher, but a tech insider helping to push the envelope ;0)
One of the good things for hobbyists about the larger process nodes is lower rejection rates. 7 and 9nm projects (from what I understand) have really high rejection rates due to the required precision. With less precision comes less mechanical error (one would think)
"This is so Awesome!"
- PRC & RF
Great content as always, thank you!
Very interesting project! I look forward to google totally bungling it and cancelling this excellent idea in 2-3 years
Thank you... it's a great video. This will open a new area of development, especially for universities and researchers.
I wish I had a longer lifespan just to learn about these things...
One thing that the creators must be careful about is size and complexity. I think nobody today has complete knowledge of the internals of Linux anymore (including, I believe, Linus and Kroah-Hartman), because it got so huge and complicated. Linus himself said it is "bloated". Many people believe that Linux would have the same capabilities and speed with a much smaller and simpler codebase.
EZ, implement a minimalist kernel that scans the hardware and recompiles another kernel for actual use.
This way the running system remains lean.
It's a common problem for all of humanity. So many things should be limited to what a single person can completely remember and understand, one actual human being & an understudy. It's arbitrary, but it forces you to get rid of what you don't need & keep things manageable.
How can you expect someone to know & follow the law when there isn't one person on earth who can understand or remember it all?
sahhaf1234
Longer lifespan? How much longer do you plan on living?
@@902384902384 but hard problems require hard solutions
Don't get me wrong, I too am a fan of elegant minimalist solutions to complex problems - but sadly that's not how the real world works.
If you think Linux is bloated, imagine how bloated Windows is.
Man, if making chips becomes that much easier, I'll definitely start a lot more projects.
FPGAs can do about anything a custom designed chip would do otherwise.
Open hardware is more about that Nvidia crap, where people can't write and update drivers for their operating system (Linux, BSD....)
Starting an asic run is millions per attempt.
And it will take more than one attempt.
@@hinz1 no. Even with the video's example - you can't program a FPGA to work across variable voltages, only high and low (1 and 0), so it would be categorically impossible to build 06:28 design on it. That is exactly the point.
Also, for researchers to be able to share their papers with no NDAs is huge - most cutting edge research on this field is conducted behind closed doors, even if for tests and trials they build mock-ups with FPGAs, the tooling and processes are wrapped in a thousand layers of bureaucracy. Honestly, what I see coming from this initiative is not much beyond what google wants - in like 4 or 5 years there will be a lot of recent graduates who already got their hands dirty, and so are better prepared to be hired. A friend of mine designed a couple of photonic chips for his PhD, and everything took so long, and he had to put in so much effort to fight for the grants necessary.... If something like this makes it 10% faster or 10% cheaper, it is already huge.
@@prgnify hmmmm why not these people share these tools on warez forums , am sure people will reverse engineer and crack them and will be in public domain . sometime piracy is good
@@Xerox482 the issue than become that if you MAKE anything using those proprietary technologies, they are entitled to it.
Same reason why open source programmers star as far away from leaked source code as possible, so there can never be any claim of infringement
Very very cool you can talk to the actual people!
Wow! What a great news! Thanks dude.
Informative video. PDKs are super secret. You can’t get one for modern processes without NDAs with the fab. Quality open PDKs should be a win for everyone.
This is kinda cool, at uni we use tsmc65 and the kind of NDAs we had to sign to do anything was quite intense. In short taking a screenshot of the software for your notes is already borderline.
I just hope that the opensource layout and simulation tools can catch up to the overpriced closed-source industry standards (looking at you Cadence and Synopsys).
I don't even know what I had to sign. If I refused, I would lose my job.
@ similar, sign or drop the class for us
Few channels give you ideas about the future; you are one of them.
love the Pulp Fiction shout out!
3:00 "good enough"
This is a key concept that will rock the electronics industry.
As Concorde showed, not every advance is economically justified or viable.
Thanks for sharing
Your videos are just the best! I love the shoutout to that in-memory compute project - very impressive.
Could all the little ICs everything needs like power mosfets, digital to analog converters, pwm controllers have open source equivalents? That could be huge for right to repair.
If you're talking about components, there are plenty of cheap jellybean third-party replacement parts, so there is no practical need besides as modules integrated inside a larger die.
Though for right to repair, the manufacturer should have that in mind during the PCB design instead of using unusual pinouts/packages or component characteristics.
Finally, I was looking for this comment. Although the previous comment says that rejected parts that are relabeled work "fine", the thing is lifespan. And then there are proprietary chips where you won't find details or datasheets... but that can be circumvented by knowing what kind of function it performs. Like a dead BIOS chip, or the PLL for clock modulation/generation. Or even remaking legacy chips, like the SID of the C64, or the TTL 74 series with higher frequency range, less voltage/current drain, better fan-in/out capacity... Heck, even make the ALS181 but in true 8-bit form!
@@carlospulido6224 I didn't say "rejected parts that are relabeled work fine". That is your misinterpretation. Like you said, I just think those little ICs cited by OP are often so simple that they are already reverse-engineered, replicated and improved by many manufacturers, including the reputable ones.
Damn *JON* you keep digging up things that are *WAY BEYOND USEFUL* ... _this one_ will probably get me _out of stasis_
Very proud to see professor Bampi's work being cited here. Hits my heart ❤️
Wow, this is amazing, John please do more content in this field.
Google got you links?! What a shocker!
more deer inserts please
I'm very excited for this. To me these processes look like Google expects people to do analog designs with them.
Hey, I really appreciate the video, thank you.
I wonder if this would help display manufacturers. There is very little consumer-facing information on the chips powering displays. Perhaps they require more compute power or have enough scale to be better served by more modern chip manufacturing.
Great video. I am hoping that we will also have an open toolkit for analog, RF and mixed signal, where I and others can try to play around.
I think that if, let's say, single board computers (SBCs) could be made using this tech at a bizarrely low price, and made to chain together easily to form a cluster for parallel processing, it could be a very nice application: at each order of magnitude of size and speed, there are many applications that could make the product commercially viable.
it's actually a map of Taipei, really cool. I can see 7xing shan in yangmingshan. Do you have a link to the whole map by any chance?
wow a video on CIM? can u also do memristors and stuff? pretty mind blowing stuff. I personally work on materials.
Stumbling upon your channel because of the YouTube algorithm is one of the best things YouTube has done for me.
Brazil, so proud.
Most products incorporating microcontrollers do not need 2 GHz. The EFR32 is one of my favorites and their bleeding edge version runs at 60 MHz. Imagine a microcontroller with exactly all of the I/O needed to do the job in one chip! I hope someone makes open-source 6502, 8051, and ARMv4 compatible microcontroller IP! That would start everything else.
Getting your chips packaged or wire-bonding to a raw die is non-trivial to setup as well. How long till someone sets up "standard" bond pad layouts and packaging for you to plop your design in? This would be great for upgrading FPGA designs as even 130nm should give an easy 10x speed up.
There is a major difference between this and GCC. GCC-compiled code could run on the latest and fastest CPUs, making such products competitive with well-known ones. Google's software can only work with ancient nodes, which results in circuits that are 100 times slower than what well-known companies can achieve. It may be useful for some power electronics or very obscure applications, but it's not commercially competitive yet.
Well said. The performance implication is tightly coupled. The open source toolchain is evolving fast but still has a mountain to climb to match the performance of proprietary tools, even for legacy processes.
Nodes? What are nodes in this context?
These tools also work for the non-open processes; I believe there was at least one success story around for GlobalFoundries 12nm.
130 nm process node would be pretty neat....
Looks cool. I understand some of these words.
Ah....brings back good memories of when I was young and working at Intel. We were all trying to figure out how we could possibly support the new 130nm design process.....
Talk about semiconductors in Brazil — it's a very small industry but a curious case.
I wonder if this is enough to implement proper open hardware smartphone.
Are any of these process nodes a BCD technology? Or are they just CMOS?
I understood very little to nothing at all, but i like your politeness and style.
Great video! I've spent the last 3 months trying to get up to speed with Openlane. The learning curve is still quite steep, and there isn't a particularly clear starting point as documentation quickly goes out of date, but I'm excited to see where this goes in the future!
I wonder if they let you make thermal imaging sensors. Or does that require a higher security clearance.
Thermal imaging sensors, known as bolometers, are made using different techniques. Maybe Asianometry would like to make a video about these quite complicated and delicate chips? Anyway, as long as you make your thermal imaging device with 9Hz/fps you're good. Higher refresh rates are a bit more ...restricted, so to speak, at least in the USA and such.
@@aeonikus1 so you need a different node/process to manufacture these sensors?
@@ballinlikebill8334 A completely different process. It's fairly advanced, high aspect ratio MEMS with custom deposition(a-Si or vanadium oxide) for the actual sensor array, plus another, normal CMOS readout chip bonded to it below.
I just started college after 2 gap years and I’m undecided on what I want to do. This video makes me want to do computer science
Anything computer related is solid but you’ll need good math skills up to differential equations to make it through.
I'm sure you'll do more research, but just a heads up, comp sci is programming, what you want is computer engineering (I think). Also, it's very competitive.
If you want to design chips go electrical engineering. Half of the software engineers I've worked with are EEs but you never see a hardware engineer with a CS degree
It's like the old etching your own PCB days, only different. ;^[}
I'd think the costs of actual foundry production is way beyond individuals, but small companies needing experimental or very specialized gear might be excited about this....oh, you mentioned academia ...very much so!
Maybe if you can combine different chips on a wafer, you can cut costs for smaller projects.
That's how custom PCBs are cheap to order nowadays.
Though I agree that we probably won't see anyone ordering 5 chips for a few dollars anytime soon.
@@SpecialeW For a small scale production run, the cost of an entire wafer isn't the problem (especially since old nodes use smaller wafers). The costs for the masks are extremely high and most likely will not get lower, since it's a lot of effort to create them. Maybe with maskless lithography one could get a handful of custom chips for a couple of thousand bucks, but that's still a lot for an individual.
@@minespeed2009 thanks for the clarification! 👍
I just think that if Google wants to make chips, this project is great for them too. They can just apply some of the best open source design ideas, or at least test them, and pay nothing at all (or maybe very little). Meanwhile Intel, AMD, nV or whoever else would have to burn cash to even start experimenting with some new architecture ideas. So it's a no-brainer for Google. Meanwhile for us it makes it cheap to design chips and maybe make something usable to some extent. I see it as a massive win-win situation. The only problem is that Google hardly makes anything on their own, and their Tensor CPU in phones is badly lagging behind and uses a lot of ARM's designs, so it's not like they can modify it, and even if they did, it might not be enough to close the gap with competitors.
There's also been MOSIS, which is an open shuttle for universities and anyone, really.
Shoutout to GCC. I met Stallman, cool guy.
@Asianometry, do you mind posting a link to the AV1 encoder paper and site?
I studied computer engineering but went into software because I was spoiled by the open source mentality there. Man, I would regret doing it if this would take off in the hardware space.
video on Cortical Labs
Are PDKs specific to a fabrication plant? So if a design was made with the SkyWater PDK, can the chip be fabricated at any fab, or only a SkyWater fab?
Typically they are not universal unless a particular fab wants to maintain compatibility between individual facilities
@@michaelharrison1093 I see, thank you!
You know how Unreal Engine 5 has enhanced nanite and photogrammetry? I hope that we see an invention similar to photogrammetry but that allows you to take videos of real world action/behavior/physics; then you upload the footage and the game engine software can reverse engineer the video footage into "code" so you can implement it into the game, instead of how they currently have to spend tons of effort creating physics effects from the ground up with code for everything in the game. (I hope this idea/concept makes any sense, I might not be explaining it properly?)
Citizen Chip Design sounds blue sky to me.
I'm eyeing this for repro retro hardware. SID, TIA, quad POKEY, etc... even the 558 is annoying to get. Maybe some 5V-tolerant CPLDs and FPGAs in production again would be nice.
One use would be TPM chips.
The picture of opening scene is the map of Taipei in Japanese occupation period?
How far could a smart design enhance the performance of 130 nm against 16 nm, for example?
The best thing since sliced cheese.
MOSIS has been around for a long time.
We’re big believers in the open source community!
Is Python really the best language to do chip design in, even if elementary? I thought we had HDLs for that...
No. No it's not...😀
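To the question above: Python isn't an HDL itself, but several projects (Amaranth, MyHDL, and the like) embed hardware description in Python and emit conventional HDL for the toolchain. Here is an illustrative-only sketch of that pattern — the `Wire` class and `emit_verilog` function are made up for this example, not any real library's API:

```python
# Sketch of the "Python builds a netlist, then emits HDL" pattern
# used by Python-embedded HDLs. Hypothetical names throughout.

class Wire:
    """A named signal in the design."""
    def __init__(self, name):
        self.name = name

def emit_verilog(module, inputs, outputs, assigns):
    """Render a trivial combinational module as Verilog source."""
    ports = ", ".join(w.name for w in inputs + outputs)
    lines = [f"module {module}({ports});"]
    lines += [f"  input {w.name};" for w in inputs]
    lines += [f"  output {w.name};" for w in outputs]
    lines += [f"  assign {dst.name} = {expr};" for dst, expr in assigns]
    lines.append("endmodule")
    return "\n".join(lines)

a, b, y = Wire("a"), Wire("b"), Wire("y")
src = emit_verilog("and_gate", [a, b], [y], [(y, "a & b")])
print(src)
```

The appeal is that Python handles parameterization and generation, while the actual synthesis still consumes ordinary Verilog.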
Good ole symbiotic mutually beneficial Google.
what's your background @asionometry guy?
Quick piece of info - I’ve always heard GDS II pronounced “GDS Two”
was looking for this comment :)
Thumbnail says it's 10 minutes but the video only has 9m 59s. YouTube is eating your videos bro. 😁
Any thoughts of this being used for general Risc-V components in the maker space? Very exciting stuff and great video as always. 👀
There are already many open-source risc-v core designs.
Ah, yes. The silicon photonic-based neural network accelerator.
5:14
> What can we do with a 130nm process node.
I'm sure you'll continue to elaborate in the video, but... A F-TONNE!
Sure it's an old process node, but this has huge ramifications for a lot of things if utilized.
Take for example, drone video transmitters. Currently the difficulty in improving our video quality comes down to the ability to compress, digitally transmit, receive and decompress the video stream. Basically every step is difficult lol. Analogue video sidesteps this issue by using - you guessed it; analogue circuits - to perform a kind of pseudo-compression by exploiting the fact RF modulation is an inherently analogue thing.
The step up from PAL or NTSC resolution to even just a poor man's 720p resolution is massive latency, an entire compression pipeline and highly advanced RF modulation - essentially a minified TV broadcast system strapped onto the drone.
The reason why almost nobody is building digital video streaming systems for drones? It requires custom silicon dedicated to these difficult steps in the video pipeline and nobody can afford it. Current solutions are stupidly expensive. Only DJI has a decent solution - on custom silicon - and still has many issues while being very expensive. If someone can use google's services in this space, and produce a digital video streaming system that has low latency, higher than analogue TV quality, has more than a hundred meters range and costs less than a few thousand dollars, they've instantly captured almost the entire market.
AFAIK going to digital communication for this type of use case scenario can be an issue. When the signal reception becomes weak, analogue signal will produce a noisy but usable image/sound whereas digital will just cut or freeze.
It was way easier to point an antenna when TV, radio or satellite communication was analog.
@@PainterVierax Yes, you're exactly right! That's why it's so difficult! The solution to this ties into my original comment.
It's possible to compress and transmit digital video in such a way that as the data stream gets more corrupt, the video loses quality and accuracy but remains readable, like analogue video would.
It quickly gets into the weeds of error checking mathematics which I *do not* understand.
What I do know is these algorithms require a modern GPU running at a reasonably high load to compress(if we don't have silicon dedicated to the compression algorithm) and highly specialized and elaborate RF equipment(if we don't have custom silicon for an RF SoC).
These algorithms and RF SoCs are why custom silicon is needed. Custom silicon can do all of this easily on last decade's technology. Today, building a digital video system is comparable to building a discrete microprocessor.
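The graceful-degradation idea in this thread can be sketched with a toy unequal-error-protection scheme: protect the base layer of a scalable stream with a simple repetition code and leave the enhancement layer unprotected, so errors cost detail before they cost legibility. Real systems use far stronger codes (LDPC, Reed-Solomon, etc.); this is only a minimal illustration:

```python
# Toy unequal error protection: repetition code on the base layer only.

def encode(base_bits, enh_bits, rep=3):
    # Repeat each base-layer bit `rep` times; enhancement goes as-is.
    return [b for b in base_bits for _ in range(rep)] + list(enh_bits)

def decode(stream, n_base, rep=3):
    # Majority-vote each group of `rep` copies back into one base bit.
    base = []
    for i in range(n_base):
        group = stream[i * rep:(i + 1) * rep]
        base.append(1 if sum(group) > rep // 2 else 0)
    return base, stream[n_base * rep:]

base, enh = [1, 0, 1, 1], [0, 1, 1, 0]
tx = encode(base, enh)
tx[1] ^= 1   # corrupt one copy of a base-layer bit
tx[12] ^= 1  # corrupt an enhancement-layer bit
rx_base, rx_enh = decode(tx, len(base))
print(rx_base == base)  # -> True: base layer survives the hit
print(rx_enh == enh)    # -> False: enhancement degrades instead
```

The hardware angle from the comment stands: doing this (plus compression and modulation) at video rates is exactly the kind of workload that wants dedicated silicon.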
Your comment at 7 minutes may explain Mythic Semiconductor collapsing? The paper authors from 2017 match.
Pentium 4 was produced in 180nm and 130nm. Sure, it's no 4nm! But giving me access to a 130nm fab feels very kid in candystore!
I wonder how Arm sees this development.
ARM sells IP. Their IP is proprietary, very expensive($M range to license one core) and has specific clauses about preventing open-source. They don't care.
@@Spirit532 didn't mean care about their ip being open sourced, but about being disrupted from below by armies of low cost open source chip designers.
@@willkydd They don't make chips. And ARM is still extremely rigid in the market.
I think GDSII (at 4:35) is pronounced "GDS two".
I'd just like to interject for a moment. What you're referring to as Linux, is in fact, GNU/Linux, or as I've recently taken to calling it, GNU plus Linux. Linux is not an operating system unto itself, but rather another free component of a fully functioning GNU system made useful by the GNU corelibs, shell utilities and vital system components comprising a full OS as defined by POSIX.
Many computer users run a modified version of the GNU system every day, without realizing it. Through a peculiar turn of events, the version of GNU which is widely used today is often called Linux, and many of its users are not aware that it is basically the GNU system, developed by the GNU Project.
There really is a Linux, and these people are using it, but it is just a part of the system they use. Linux is the kernel: the program in the system that allocates the machine's resources to the other programs that you run. The kernel is an essential part of an operating system, but useless by itself; it can only function in the context of a complete operating system. Linux is normally used in combination with the GNU operating system: the whole system is basically GNU with Linux added, or GNU/Linux. All the so-called Linux distributions are really distributions of GNU/Linux!
I'd like to read more economic analysis of open source. Yeah, Google and consumers benefit, but professionals... they get more tools, but have less incentive to sacrifice their time if they aren't getting paid. I think a lot of open source comes from unemployed programmers trying to build their portfolios, or professors with job security. The network effects of shared tech are great, but it would be cool if open source had a kickback mechanism; right now it feels like the gig economy.
I think it depends on the country. Here in the EU, more and more viable business plans based on open source have grown up over the last decades.