Google’s Chip Designing AI

  • Published Dec 3, 2024

COMMENTS •

  • @Asianometry
    @Asianometry  3 years ago +24

    Like and subscribe! And if you're interested in other tech deep dives, check out this playlist: ua-cam.com/play/PLKtxx9TnH76RiptUQ22iDGxNewdxjI6Xh.html

    • @raylopez99
      @raylopez99 3 years ago +1

      Will this AI be the death of EDA vendors like Cadence?

    • @Asianometry
      @Asianometry  3 years ago +3

      No it won’t be. They’ll probably make their own

    • @masternobody1896
      @masternobody1896 3 years ago +1

      @@Asianometry Can't wait for AI to get smarter so it can make faster CPUs, so I can get more FPS in games.

    • @StefanWelker
      @StefanWelker 3 years ago

      I think you should not announce that "you are butchering a name"; either research how it's pronounced or say it however you want. Announcing it is pretty offensive. Those names were pretty easy to pronounce just by reading the letters.

    • @raylopez99
      @raylopez99 3 years ago +1

      @@StefanWelker LOL nice troll.

  • @TechTechPotato
    @TechTechPotato 3 years ago +13

    Thanks for referencing my video!

    • @TechTechPotato
      @TechTechPotato 3 years ago +3

      Synopsys and Cadence both have their own respective data as well.

  • @bradsalz4084
    @bradsalz4084 3 years ago +327

    As an integrated circuit (IC) designer, I suppose I should feel a little threatened by such AI technology taking my job. But like all other design tools, this will just make the remaining human designers more efficient and accurate. Early in my career I experimented with the then-available "optimization" engines built into the Cadence and ADS design tools and ran into the same problem you describe here: the local minimum of the error function is often not the same as the global minimum. So humans still have to figure it out. You can't just run off for coffee and wait for the solution to pop out. You address the floorplanning (layout) problem here, but for a schematic designer it is an exponentially harder task. If you already have an architecture and process node selected, I do concede that a machine will be able to size and place devices faster and more efficiently than any human can. The problem is that there is a creative step in front of it that is still in the land of human invention, intuition, and judgement. For now, anyway. I'm sure even that will be done better by machines one day. But for now I remain gainfully employed.

    • @kuantumdot
      @kuantumdot 3 years ago +5

      Very spot on!

    • @GoogleUser-ee8ro
      @GoogleUser-ee8ro 3 years ago +9

      The video said that industry has been using traditional optimization methods such as annealing for a long time, but the TPU's DL+RL approach tackles the problem faster, with accuracy similar to humans. It makes me wonder how much of the speed gain was achieved by Google's gigantic GPU clusters versus traditional EDA computation power, and how much is truly attributable to algorithmic superiority. DL+RL is supposed to be able to "discover" IC design parameters/traits that are missed by human engineers, but based on the conclusion of Google's paper, no such finding is reported. Your job is still very safe; human engineers just need more powerful computers to do their job. Another place where I see Google's method being more useful is chip verification. As explained in a previous video, we are running into a crisis: a shortage of human engineers to do verification work. If DL+RL can help there, the productivity gain will be enormous.

    • @fukushimaisrevelation2817
      @fukushimaisrevelation2817 3 years ago +4

      Yep, sorry, your IC designer profession is about as obsolete as the horse and buggy. However, there will be a new chip-auditing profession to review AI chip designs and make sure the AI Skynet doesn't try to take over and/or destroy the world. Good luck, the rest of mankind is depending on you, no pressure. Ah, who am I kidding, mankind is too reckless to have a human review AI chip designs. Most likely mankind will have a different AI perform a study on the AI chip designs to audit them, and then the government will rubber-stamp the self-regulated industry's AI studies in typical government CYA fashion.

    • @gazzy01
      @gazzy01 3 years ago +1

      Do I also have to feel threatened as a software engineer?
      I think AI has gained far more ground in the CS field than in chip design.

    • @joemerino3243
      @joemerino3243 3 years ago +19

      @@fukushimaisrevelation2817 Imagine getting your entire idea about how AI is going to work out from science fiction movies written by arts majors...
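The local-vs-global-minima problem bradsalz4084 describes above is easy to see with a toy greedy search. A minimal sketch (the cost function, starting points, and step size are made up for illustration, not taken from any EDA tool):

```python
def hill_climb(f, x0, step=0.1, iters=1000):
    """Greedy local search: move to a neighbor only if it is better."""
    x = x0
    for _ in range(iters):
        best = min((x - step, x, x + step), key=f)
        if best == x:
            break  # no improving neighbor: stuck at a local minimum
        x = best
    return x

# Toy cost function with a shallow local minimum near x ~ 0.9
# and the global minimum near x ~ -1.1.
f = lambda x: 0.5 * x**4 - x**2 + 0.3 * x

# Started in the wrong basin, greedy search never finds the global minimum.
local = hill_climb(f, x0=1.5)    # settles near 0.9
best = hill_climb(f, x0=-1.5)    # settles near -1.1, a much lower cost
```

Which answer you get depends entirely on where you start, which is why "humans still have to figure it out."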

  • @nisbahmumtaz909
    @nisbahmumtaz909 3 years ago +85

    9:26 "I can't find an explanation for how [insert ML tool] works, but I CAN find how they train it"
    As an ML scientist, this is close to 90% of how it is. The areas where it gets a lot more analytical are transfer learning and a huge chunk of reinforcement learning. While we know what goes into training the neural nets, the black-box intuition they develop is as close as it gets to modern magicry.

    • @PS-re4tr
      @PS-re4tr 3 years ago +3

      Is there any hope of figuring out how the ML tools work or do we have to accept that they will remain black boxes?

    • @nisbahmumtaz909
      @nisbahmumtaz909 3 years ago +6

      @@PS-re4tr Oh, it's absolutely not impossible. That's why I say that in fields where finding out the nodal weights is important (transfer learning, reinforcement learning), people pay extra close attention to them and how they develop with each iteration. They can become more and more grey boxes, depending only on how many resources you're willing to devote to researching them.

    • @lolgamez9171
      @lolgamez9171 2 years ago

      Look up kernel machines and machine learning. We've cracked this black box

    • @J3R3MI6
      @J3R3MI6 2 years ago

      Magicry is my new favorite word.

  • @windmill1965
    @windmill1965 3 years ago +40

    Although it was quite a number of years ago, I did the floorplanning and physical layout of an analogue power chip. That is a completely different world from the digital circuits presented in this video. I don't know how much has been automated these days, but we had to place individual transistors in the correct orientation relative to the temperature gradient on the chip. Individual interconnects had to be adjusted to the maximum current that could flow in them, symmetry between two transistors or blocks was in some cases paramount, voltage drop on the supply or ground wire could destroy the accuracy of a block, and so on. There were so many constraints that it was difficult to convey them from the electronics designer to the physical designer. The electronics designer would often do the most crucial portions or blocks of the physical design himself.

  • @xelaxander
    @xelaxander 3 years ago +267

    Thanks for being precise about machine learning. There's way too much BS floating around in that field. The reinforcement learning approach seems like another decent tool in the box to tackle a very difficult problem. Honestly, that's more than anyone can ask for, imho.

    • @kuantumdot
      @kuantumdot 3 years ago +3

      Spot on

    • @LiveType
      @LiveType 3 years ago +4

      I always laugh when people say AI overlords are closing in on being a reality. GPT and its supermassive dataset are definitely getting closer, but it's not there yet. It did set new records, though. I feel like there is still a step missing somewhere, as the processing power is more than sufficient. Maybe by the end of the decade there will be a new paradigm that enables it. Transformers are the new hotness right now, which is what GPT is based on. Very impressive, and rather difficult to implement, from my experience.
      Machine learning, I find, is not always the best tool for the job, but it is amazingly versatile and adaptable, and more often than not yields shockingly good results for not much effort. Assuming you know what you are doing.

    • @andreicozma6026
      @andreicozma6026 3 years ago +3

      @@LiveType Machine learning is really lower-level than AI. AI encompasses ML, but not the other way around. AI tools and models are based on ML concepts and approaches at their core. It's a rather fuzzy line. One way to think about it is AI being the high-level systems, while ML is the lower-level concepts that make up those systems.

    • @circuitgamer7759
      @circuitgamer7759 3 years ago +1

      @@andreicozma6026 I don't think AI has to contain ML in every case; preprogrammed rules can be used in an AI, for example. Unless I'm wrong there, but I think I'm right. If I'm wrong, let me know...

    • @andreicozma6026
      @andreicozma6026 3 years ago +2

      @@circuitgamer7759 You're actually correct. I guess to phrase what I said more correctly: ML is a subset of AI. So all of ML technically counts as "AI", but like you said, not all of AI necessarily has to be part of ML.

  • @al8-.W
    @al8-.W 3 years ago +10

    I am a junior machine learning engineer at a startup company, having a tough time getting good with limited support. Still loving it. I love this field for many reasons. My very broad technical interests led me here: I could never choose whether I wanted to study fundamental physics or maths. I also discovered after graduating that I was very interested in hardware, despite hating electronics practicals. Now I'm happy to sit on this gold mine of opportunities. Discovering the very tools for finding out the best way to do basically anything is exciting. The field is very competitive, but I'm sure we could use a lot more people. The methodological foundations of machine learning are so entangled with critical thinking and quality scientific reasoning that I think societies will greatly benefit from people getting interested. I hope we get there eventually.

  • @user34274
    @user34274 3 years ago +5

    Your channel (the content, subject matter, brevity of delivery, lack of distracting snazzy video editing, and the minimal, soothing mode of delivery) is just brilliant. Love from Australia.

  • @deletechannel3776
    @deletechannel3776 3 years ago +35

    Ah yes, a neural network trained to design chips designed a chip to train neural networks

  • @conradwiebe7919
    @conradwiebe7919 3 years ago +9

    Big respect for shouting out TechTechPotato

  • @stevenfranks3131
    @stevenfranks3131 3 years ago +14

    Really enjoy following along as you explore different topics going on in the tech world and beyond. Thanks!

  • @VioletPrism
    @VioletPrism 3 years ago +85

    I feel like this will eventually be the only way forward, with how complicated CPUs have become.

    • @platin2148
      @platin2148 3 years ago +4

      Which will eventually make it useless for us, because we can't write software for it if it ignores constraints. And we might have even more hardware vulnerabilities. It's definitely a help, though.

    • @kobilica999
      @kobilica999 3 years ago +14

      @@platin2148 It's an optimization problem, so why can't it be constrained?

    • @platin2148
      @platin2148 3 years ago +1

      @@kobilica999 I dunno if you've ever looked at a heat map of any of the more complex AIs, but even making that map is incredibly difficult.

    • @trapfethen
      @trapfethen 3 years ago +8

      @@kobilica999 Because constrained optimization is literally one of the hardest problems to solve, specifically because many constraints affect one another: tweak this variable over here and three others change. Unconstrained optimization is much easier in comparison, which is why AI has been deployed much more readily in areas where its function fell squarely in unconstrained-optimization territory.
      Obviously, that isn't to say that AI CAN'T be applied to constrained optimization problems; it can, has, and will be in the future. You have to find a means of modelling the constraints in the reward function of the AI. Over time this leads the AI to build an internal world model that satisfies the constraints. I make it sound simple here, but there are many gotchas: situations you didn't think to constrain against because it never occurred to you that they would come about (common-sense stuff again); a slight misalignment between the AI's internal world model and the constraints can lead to erroneous results outside of the test data; etc.
      This is one of the reasons that companies like Tesla put so much effort into collecting shiploads of real-world data: it is much easier to verify AI efficacy if you cover more use cases within the field.
      Just some thoughts and rantings from a developer. Hope this helped.

    • @thelelanatorlol3978
      @thelelanatorlol3978 3 years ago

      @@platin2148 It will only ignore constraints to the extent that the human specifying those constraints ignores them.
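The reward-shaping idea trapfethen describes (modelling constraints in the reward function) can be sketched with a toy penalty term. All the numbers and names here (placement score, wirelength budget, penalty weight) are illustrative assumptions, not values from Google's paper:

```python
def reward(placement_score, wirelength, wirelength_budget, penalty_weight=10.0):
    """Toy shaped reward: the raw objective minus a penalty proportional
    to how far the candidate exceeds the wirelength budget."""
    violation = max(0.0, wirelength - wirelength_budget)
    return placement_score - penalty_weight * violation

# A candidate that meets the budget beats a slightly "better" one that blows it,
# so the agent is steered toward feasible placements.
feasible = reward(placement_score=8.0, wirelength=90.0, wirelength_budget=100.0)
infeasible = reward(placement_score=9.0, wirelength=120.0, wirelength_budget=100.0)
```

Picking the penalty weight is exactly where the "gotchas" live: too small and the agent learns to cheat the constraint, too large and it never explores near the boundary.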

  • @seth_deegan
    @seth_deegan 3 years ago +32

    Can't wait for machine-learning-based city planning!

  • @lekhakaananta5864
    @lekhakaananta5864 3 years ago +58

    ML isn't magic, but it does fit what you'd expect at the start of the singularity. Google can now produce in days chip designs that used to take weeks. And what kind of chips did Google use ML to design? Tensor Processing Units, i.e. chips optimized for more ML. So we should expect an exponential increase in the hardware-level efficiency of ML techniques, until we run into some limit to the scaling.

    • @MrFaaaaaaaaaaaaaaaaa
      @MrFaaaaaaaaaaaaaaaaa 3 years ago +17

      There are hard limits to what ML can do in this field; i.e., a perfectly organized chip will not have infinite performance.
      So there are only margins to be gained from ML here. I don't think this is significant in terms of approaching the cyber-singularity.

    • @lekhakaananta5864
      @lekhakaananta5864 3 years ago +15

      @@MrFaaaaaaaaaaaaaaaaa I also don't think you can get the singularity by chip optimization alone, but that's not the important part. The increased ML capability can be generally applied to other fields. Just off the top of my head, if you apply them to molecular simulations and materials science, you might get a better chip production process and thus open up new spaces in chip-design.

    • @andrewferguson6901
      @andrewferguson6901 2 years ago +8

      It's always been like this. We've been using computers to aid in chip design since it was possible. We used good steel tools to make better steel anvils etc.
      All of technology is used to accelerate development of more technology

    • @lekhakaananta5864
      @lekhakaananta5864 2 years ago +4

      @@andrewferguson6901 Well yeah, that's the definition of technology. Increase in capability results in some of that capability being used to further increase capability in ways not possible before.
      The non-trivial thing about singularity arguments is that we're approaching some new speed of this. Which judging by exponential curves of things like GDP, is a reasonable extrapolation. It used to be that metal working took many generations of human experience to self-improve. Now chip design AI can self-improve in an iteration time of weeks.

    • @msclrhd
      @msclrhd 2 years ago +2

      Note that this is using ML to lay out the parts on the chip. For example, the component that handles matrix or tensor multiplication. The ML engine hasn't designed the circuits of those components.

  • @TomAtkinson
    @TomAtkinson 2 years ago

    I really like this Amidala meme: Luke's gaze-that-kills suddenly cutting through her playful banter. Oh, and great video too, bruv! ;)

  • @joe7272
    @joe7272 3 years ago +3

    An architectural difference: the professional laid out the macroblocks in a grid-like, organized fashion; the AI did it in a rounder, more organic-looking pattern.

  • @evil0sheep
    @evil0sheep 2 years ago +5

    Great video! One nit: simulated annealing is far less prone to getting stuck in local minima than gradient descent/hill climbing algorithms, at the expense of efficiency and accuracy in finding the minimum. Because of this, a common iterative optimization strategy is to use simulated annealing to get close to the global minimum, then use that as a starting point for a gradient descent algorithm that finds the true global minimum.
    Also, I don't think simulated annealing is a greedy algorithm. Gradient descent algorithms may qualify as greedy, but it seems really weird to me to call annealing 'greedy'.
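The two-stage strategy described here (anneal to get near the global basin, then descend to polish) can be sketched on a one-dimensional toy cost function. Everything below (the function, temperatures, cooling rate, step sizes) is an illustrative assumption, not a production annealer:

```python
import math
import random

def anneal(f, x0, t0=2.0, cooling=0.995, steps=4000, seed=0):
    """Simulated annealing: sometimes accept a WORSE neighbor with
    probability exp(-delta/T), so the search can escape local minima.
    As T cools, the walk settles into a low basin."""
    rng = random.Random(seed)
    x, t = x0, t0
    for _ in range(steps):
        cand = x + rng.uniform(-0.5, 0.5)
        delta = f(cand) - f(x)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = cand
        t *= cooling
    return x

def refine(f, x, step=0.01, iters=2000):
    """Greedy local descent to polish the annealed solution."""
    for _ in range(iters):
        best = min((x - step, x, x + step), key=f)
        if best == x:
            break
        x = best
    return x

f = lambda x: 0.5 * x**4 - x**2 + 0.3 * x   # global minimum near x ~ -1.07
x = refine(f, anneal(f, x0=1.5))
```

The annealing stage is coarse and stochastic; the refinement stage is deterministic and precise. That division of labor is the point of the comment.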

  • @negativegamma4453
    @negativegamma4453 3 years ago +3

    This is great, thanks. I am an ML engineer of sorts. There is a line of thinking that there is a lot of value in a model that is merely equal to a human: you can spin up 1000 instances, whereas you can't really hire 1000 employees. At something like 70% of human performance you can already see the time savings versus routing things through a human. Also, there is "natural" performance inflation due to better hardware over time: that 70%-of-human model should be something like 20% faster each year, i.e. 84% of human in year 2, then 100.8% in year 3, and so on.
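The year-over-year numbers in this comment check out under its own stated assumptions (a model at 70% of human performance, compounding a ~20% yearly hardware speedup):

```python
# Relative-to-human throughput, compounding the comment's assumed 20%/year
# hardware gain onto a model starting at 70% of human performance.
perf = [0.70 * 1.20**year for year in range(3)]
# year 1: 0.70, year 2: 0.84, year 3: 1.008 -> surpasses human in year 3
```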

  • @tonysu8860
    @tonysu8860 3 years ago +11

    Your attempt at explaining something you don't understand is commendable.
    Let me have a try, based on what I know about how Google's AlphaZero machine learning works from a 30,000-foot level, and then guess how it's applied to chip design.
    AlphaZero is nearly unique among AIs in that the algorithm teaches itself entirely from the beginning, without any human guidance, instruction, or intervention. The only things the algorithm is given are the basic parameters of the game/problem, and it starts with trial and error to discover basic moves/relationships, building its skill from scratch. Essential to the process, and different from much other machine learning, is its use of the Monte Carlo approach: create long and often very complex solutions, but don't file a final score for a procedure until the very end. This is computationally heavy, but it avoids solutions that look attractive at first yet lead to a less optimal result, while making it possible to consider less optimal next steps that eventually arrive at a better outcome.
    Another aspect of neural networks your video didn't seem to clearly describe is that there is a big difference between training the algorithm and solving the actual problem.
    Training is performed by running the algorithm constantly, 24/7/365, and may require well over a year to achieve world-class capability with over 93% accuracy (comparable to the best humans in the world: fully trained, experienced, and typically with the best education available). It's slow and tedious, and typically involves crunching terabytes of data of known solutions (yes, already solved).
    The algorithm can be used at any time, but the more time spent training it, the better its capability.
    Then, when you have a new problem, you can run it through the algorithm and get a result.
    In your video, you said that the AlphaZero-style solution was only approximately the same quality as 3 other known ways of creating the chip floorplan. That suggests to me that the algorithm is probably immature. It might be only equal to one or at most two other methods, but my feeling is that if matched against 3 other methods, it should clearly better at least one of them, if not all.
    I would guess that within another year, the algorithm should be able to beat every other approach to creating the best floorplan, even with the possibility that chip floorplans will become vastly more complex with things such as stacked 3D layering.

  • @dekev7503
    @dekev7503 3 years ago +7

    Floorplanning is just a small step in chip design. I know this because I'm a master's student in microelectronics engineering and I'm literally taking a course in physical design this semester. There are more complex steps, and floorplanning is just 5% of the topics covered in the course: design for test, ATPG, static timing analysis, DRC, etc. The way journalists describe this topic makes it seem like the AI designs the chip from scratch.

  • @tykjpelk
    @tykjpelk 2 years ago +3

    Inverse design is slowly becoming a powerful technique in my own field of integrated photonics. The idea is that by telling algorithms what we're looking for, they can design extremely efficient devices. This is at the single-device level, not layout. A simple example is a splitter that sends two different wavelengths down different paths, or that combines them. A human would typically make a device where small differences add up over a long distance, easily hundreds of microns. Not so for an inverse design algorithm: they typically produce QR-code-like patterns a few microns in size that make no sense to a human, but kind of work. What lets us designers sleep at night is that (a) the designs are usually impossible to fabricate reliably, because they use tiny features and corners, and (b) the really impressive ones perform relatively coarse tasks (TE/TM splitting, separating whole frequency bands) with lower efficiency than a human-optimized, physics-based design.

  • @benjones1717
    @benjones1717 3 years ago +3

    8:21 I love that baseball pitch flying punch, we need more special moves in baseball.

  • @alexscarbro796
    @alexscarbro796 3 years ago +7

    An excellent video.
    Differential Evolution is another good (global) optimiser that is pretty good at not getting stuck in local minima.
    That rotating wafer was beautiful BTW!

  • @AtriumComplex
    @AtriumComplex 3 years ago +5

    Hi there, good video. Just two points of clarification. You said simulated annealing uses an objective equation based on objective factors. This suggests you are reading "objective" as "neutral"; simulated annealing is actually an attempt to minimize an objective (as in goal) function.
    Additionally, the weakness of simulated annealing is not that it gets stuck in a local minimum. Instead, its weakness is that it can only find the approximate global minimum. Simulated annealing is actually a strategy to escape local minima. I like to think of it as "smoothing out" the loss landscape, so that peaks aren't so high (which would trap the optimizer) but also valleys aren't as low (which makes the solution approximate).
    I think you did a really good job summarizing, especially since this isn't necessarily your field! :)

    • @reinerfranke5436
      @reinerfranke5436 3 years ago

      The practical problem is more difficult than this. To hit a minimum speed, the maximum net length is one constraint, but the average net length matters for minimum power. So there is one objective, but with an additional constraint; in practice, many.

  • @raylopez99
    @raylopez99 3 years ago +21

    An even bigger bottleneck than floorplanning is testing, which has even more (I guess) than 10^9000 possibilities. You should do a video on this; it's been in the news.

    • @Asianometry
      @Asianometry  3 years ago +8

      Oh like verification?

    • @raylopez99
      @raylopez99 3 years ago +8

      @@Asianometry Yes. Test vectors to test every conceivable combination on a combinational and logic circuit is prohibitively large. There's a UA-cam video on this...that you can perhaps elaborate on...let me see if I can find it...ah, here it is, you did it! :) "The Growing Semiconductor Design Problem" Dec 5, 2021, maybe link to it.

    • @vatsan2483
      @vatsan2483 3 years ago +2

      @@Asianometry Yes, this is a big fish, because an ML that can arrive at the best test cases and boundary conditions would be a great tool.

    • @williambrasky3891
      @williambrasky3891 3 years ago +2

      @@Asianometry Recently I came across a video by one of the silicon-focused creators.
      I'm paraphrasing (so the exact ratio is likely different than what I state here), but the gist was that over the last decade or so, especially, verification has become a greater and greater resource hog. Most firms have something like 2-4 times the people working on verification versus design. It'll soon grow into such a colossal undertaking that current methods become infeasible. Apparently, that's where they are especially concentrating on leveraging AI techniques. Makes sense; it's the sort of problem for which NNs are well suited.

    • @waldemaro12345
      @waldemaro12345 3 years ago +2

      @@raylopez99 I think John recent video was touching this subject ua-cam.com/video/rtaaOdGuMCc/v-deo.html
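The combinatorial blow-up behind this verification thread is easy to quantify: an n-input combinational block has 2^n possible input patterns. A tiny sketch (a toy enumerator, not how real ATPG tools work):

```python
from itertools import product

def exhaustive_vectors(n_inputs):
    """All input combinations for an n-input combinational block: 2**n vectors."""
    return list(product((0, 1), repeat=n_inputs))

# 4 inputs -> 16 vectors, trivially testable. But a 64-input block would need
# 2**64 (about 1.8e19) vectors, which is why exhaustive testing is hopeless
# and smarter vector selection (possibly ML-guided) matters.
vectors = exhaustive_vectors(4)
```

Sequential state makes the real problem far worse still, since each flip-flop multiplies the state space by 2.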

  • @8bitorgy
    @8bitorgy 3 years ago +2

    Saying go is more complex than chess is like saying Cyrillic is more complex than cuneiform because it has more letters.

    • @Asianometry
      @Asianometry  3 years ago

      Deep

    • @gyroninjamodder
      @gyroninjamodder 3 years ago

      Similarly, a best-of-REALLY_LARGE_NUMBER tic-tac-toe match would have a large state space, but it is still possible to play optimally.

  • @khatharrmalkavian3306
    @khatharrmalkavian3306 3 years ago +2

    4:45 - You have the terms reversed here. Hill climbing is the naïve algorithm. Annealing is a modification designed to escape local optima. Annealing is a modification to the hill climbing algorithm where you sample the function with large steps, then on each iteration the steps get slightly smaller until you find a stable optimum.

  • @jerrywatson1958
    @jerrywatson1958 3 years ago +2

    Another great topic and video! John you are on fire! Thanks for all your hard work.

  • @cuanclifford5922
    @cuanclifford5922 2 years ago +2

    Google's Chip-Designing AI*
    It's the difference between an AI that designs chips and a chip that is designing AI.

  • @beyondsingularity628
    @beyondsingularity628 3 years ago +2

    The predictability/regularity and high-quality feedback (wirelength and other measures) of the chip design field make it ideal for machine learning! Very optimistic about the trend ❤️

  • @tonyduncan9852
    @tonyduncan9852 3 years ago +1

    The future is unimaginable. Nearly. Cheers.

  • @lesptitsoiseaux
    @lesptitsoiseaux 2 years ago

    Best new channel I found in 2020. Great job!

  • @vinicentus
    @vinicentus 2 years ago +1

    Do you post your sources for the information in your videos anywhere? I would definitely be interested in digging even deeper into many of the topics you present.
    Great video btw👍

  • @TheEVEInspiration
    @TheEVEInspiration 2 years ago

    About floor planning, seeing the problem I instantly see another solution.
    1. Start by generating for each block N solutions with different edge interface layouts (it does not have to be perfect in this stage)
    2. Do the usual optimization, but with the freedom to select the best fitting prepared versions.
    3. Once a good overal layout is found, optimize the interfaces between the blocks and then the blocks internals to fit that interface.
    Overall, its an outside-in approach, but with a pre-processing step that optimized overal layout first.

    • @slicer95
      @slicer95 2 years ago

      A floorplanning algorithm is not supposed to touch the blocks; the granularity should not go below the block level. Otherwise it becomes a much harder problem.

  • @jessstuart7495
    @jessstuart7495 3 years ago +1

    2% to 5% chip performance increase (power, or speedup) is well within the region of diminishing returns. The real advantage is the reduction in time-to-market.

  • @Bob-em6kn
    @Bob-em6kn 2 years ago +1

    These are only early studies. If this takes off, it will be revolutionary.

  • @EyesOfByes
    @EyesOfByes 3 years ago +3

    4:02 *NICE.*

    • @Gameboygenius
      @Gameboygenius 3 years ago

      Ikr? I thought it was a missed meme opportunity, but Jon had us covered.

  • @chrisfisher6700
    @chrisfisher6700 3 years ago +2

    Another brilliant video. Much appreciate your excellent work. Quite curious your thoughts about how long it will take for quantum computing to make an impact on floor planning? How far do simulated annealing solutions such as DWave need to improve before they can be used more efficiently than ML?

  • @fischX
    @fischX 3 years ago +1

    The shocking thing is not that it is good, but that it is good at basically the first shot. Compared to chess, it's the "look, it beats *a* human" moment; there's probably plenty of room for improvement in speed and quality.

  • @ddoice
    @ddoice 3 years ago +1

    At 4:05, just to give some context: the estimated number of atoms in the whole universe is 10^80.

  • @GoodBaleadaMusic
    @GoodBaleadaMusic 3 years ago +1

    This could design rolling papers better. A needed upgrade we all crave to be sure.

  • @raphaelcardoso7927
    @raphaelcardoso7927 3 years ago +2

    You said that according to an Intel study, 50% of the power is spent on interconnect. Do you have a reference for that? I'm doing a study in interconnects and I'm finding it hard to get my hands on those data. Thanks!

    • @reinerfranke5436
      @reinerfranke5436 3 years ago

      At 5nm and below, interconnect is more than 90%. In the 90s, or in discrete board design, it was the other way around.
      Chip logic is defined by written textual expressions and synthesized to gates. Neither carries information about placement and distance, but both define the performance. To me it seems simpler if logic synthesis guided the logic definition, calculating directly from the logic expression using the performance metric. AI could then perform expression transformations for a better metric. Today this process is done manually by chip architects, guided by logic equivalence checkers.

  • @stevegunderson2392
    @stevegunderson2392 2 years ago +1

    Think how much coffee will be saved by floorplanning with machine learning! I have been doing floorplanning for over 40 years, and I really like coffee!

  • @anterprites
    @anterprites 2 years ago +1

    6:51 But designs exist! Yes, they do :D

  • @dmitriikruglov320
    @dmitriikruglov320 3 years ago +1

    I guess in analog IC design, where you start with the transistor model rather than a logic block, these ML/AI tools will come much later. A different frequency, a different spec, a different application: for each of those you'll often have to change the whole circuit in a non-trivial way to accommodate it.

    • @reinerfranke5436
      @reinerfranke5436 3 years ago

      OK, to pour a little water on this: look at an op-amp. By specifying the datasheet specs you can select a minimal topology and numerically dimension the devices. No need for AI. It would be far easier to have a "topology Google search" over all previously built circuits and apply them to your problem. It's simply a curtain of secrecy that leads most analog IC designers to reinvent a solution.

  • @martinsimlastik5457
    @martinsimlastik5457 3 years ago

    Great summary video!!!

  • @avinashdas1013
    @avinashdas1013 3 years ago

    Lovely documentary on trending topics in chip industry.

  • @deliciouspops
    @deliciouspops 2 years ago +1

    It would be pretty accurate to compare machines to humans on a performance-per-watt basis :D There is a reason we do not use calculators.

  • @umountable
    @umountable 2 years ago

    4:42 Simulated annealing is not greedy. In the context of computer science algorithms, "greedy" means that the algorithm does not plan into the future when making decisions, but selects whatever looks best right now. That will often not get you to the globally optimal solution.

  • @leonjones7120
    @leonjones7120 2 years ago

    Great explaining of this technology! great stuff!

  • @depth386
    @depth386 3 роки тому

    After learning about Boolean gates I started working on 4-bit CPU sub-components like an adder, a comparator, a memory piece, etc. Didn't get far, but it was a good intellectual activity.

  • @tobiasmmueller
    @tobiasmmueller 2 роки тому

    4:05 OMG, it’s over 9000!!1!

  • @helmutzollner5496
    @helmutzollner5496 3 роки тому +1

    ... and I love listening to your content! Thank you John!

  • @StanUlch
    @StanUlch Рік тому

    Logistical algorithms could play a useful role in determining parameters of significance between nodes. Just an observation.

  • @snawsomes
    @snawsomes 2 роки тому

    Would be interested to see how this could be used for indoor aeroponic farms.

  • @aniksamiurrahman6365
    @aniksamiurrahman6365 3 роки тому

    Hello Mr. John. In a previous video, you talked about the validation problem. Do you think this technology gonna help in that?

  • @leyasep5919
    @leyasep5919 3 роки тому

    Please ! MORE videos on this subject !
    Thanks :-)

  • @dongshengdi773
    @dongshengdi773 3 роки тому +2

    i have an AI friend …
    the perfect partner .
    She can do anything for me;
    Cook, wash the dishes ,
    mop the Floor , do gardening,
    even gives me a massage.

  • @lidarman2
    @lidarman2 3 роки тому +3

    @12:35. Having a maid is weird enough but a maid quarters without a shower? :P

    • @alexmartian3972
      @alexmartian3972 3 роки тому

      3 showers and not a single bathtub on the plan.

  • @jack504
    @jack504 2 роки тому

    Could you do a video about the Tesla Dojo? It would be great to know more about it, e.g. efficiency for machine learning vs. other commercially available products, and whether Tesla poached expertise from elsewhere or outsourced some of the design.

  • @SkillsToLearn
    @SkillsToLearn 2 роки тому

    Thank you for the great video!

  • @scottspitlerII
    @scottspitlerII 3 роки тому

    5:00 You are literally talking about the halting problem in computing; annealing is probably a non-NP or an NP-hard problem.

  • @odaialzrigat
    @odaialzrigat 3 роки тому +1

    Wonderful content

  • @mhassaankhalid1369
    @mhassaankhalid1369 Рік тому

    great video Jon

  • @bassmechanic237
    @bassmechanic237 3 роки тому

    Awesome video subjects and content

  • @chaitanya.pinnali
    @chaitanya.pinnali 3 роки тому

    Can you please make a video about Lam Research as well?

  • @blengi
    @blengi 3 роки тому

    Can the AI determine patterns in local and non-local minima to the point where it can, say, generalize and efficiently encapsulate these into some "simpler" higher-level design principles or methodologies, such that one doesn't need the AI tool after discovery, perhaps to aid the evolution of designs from different perspectives? Or do engineers just interpret the results from the various simulated metrics and thus only optimize over the numbers?

  • @AdityaChaudhary-oo7pr
    @AdityaChaudhary-oo7pr 3 роки тому

    that was amazing information !!!

  • @davecool42
    @davecool42 3 роки тому +1

    FYI your microphone is still buzzing. Great video nonetheless!

  • @nicholas6186
    @nicholas6186 2 роки тому

    It looks like the ball throws the character instead of the other way around. 8:25

  • @johnfilmore7638
    @johnfilmore7638 2 роки тому

    Using the last example, AI design of a home floorplan, it's hard to see this being more efficient without spending inordinate amounts of time defining the constraints of each element in relation to the others before running the AI calculations.
    Human intuition for good ergonomics (receptacle and door locations, counter heights, beds relative to night tables, outdoor walkways down a grade, for example) is mind-bogglingly challenging and time-consuming to determine and quantify when developing constraint rules for the AI engine.
    Humans are creatures of habit, and we have intuitive ways of navigating. If AI determines that a certain walkway grade and width is most efficient for human anatomy to traverse, but it is perceived to be too narrow or to have an unprotected drop-off, then you will not have a happy homebuyer, even if they could learn to feel safe walking it at night.
    I believe there will always need to be a hybrid of human design and machine learning. Using home floorplan design as an example, AI would be great at taking a human-designed floorplan built from standard building blocks and assemblies (industry-standard trusses, 2x4s, drywall, etc.); an AI engine could then use similar construction code, written as constraints, to generate an optimal routing of electrical, gas, plumbing, and so on.
    An obvious constraint is designing around industry-standard material sizes to reduce the amount of custom cutting needed. A buyer wanting a "custom home" probably doesn't mean they want a home that can't fit an industry-standard fridge, freezer, ducts, sub-flooring, etc. Some features are merely perceived as custom, while some may genuinely need to be nonstandard, and those elements would need to be human-designed, unless this was simply an exercise in "seeing what the AI is gonna make".

  • @larryteslaspacexboringlawr739
    @larryteslaspacexboringlawr739 3 роки тому

    thank you and posted to reddit

  • @userme2803
    @userme2803 3 роки тому +2

    Does that mean Google is the future company that we should invest in? I feel the growth of this company is unlimited.

    • @soup100
      @soup100 3 роки тому

      It's the best pure monopoly today, IMO. ASML too.

    • @rjhacker
      @rjhacker 3 роки тому +1

      Nvidia and others have the same techniques. The future success of each company depends not just on the fundamental technology but on business variables and the decisions of individuals, which is chaotic. It would take far more in-depth research to get a feel for the winner in AI over 10-20 years, and even then there is no guarantee that they can profit from it long-term, or that it won't turn into a generic commodity technology that simply benefits everyone.

    • @tardvandecluntproductions1278
      @tardvandecluntproductions1278 3 роки тому

      Looking at their stock price, they sure know how to keep growing and growing.

  • @nonetrix3066
    @nonetrix3066 2 роки тому

    Ah had this idea seems it's already being done :P

  • @vishnusureshperumbavoor
    @vishnusureshperumbavoor Рік тому

    Now the trend is back

  • @guard13007
    @guard13007 2 роки тому

    An important thing to note: it doesn't matter if the AI version gives a result that's 5% worse than a human or existing tools if it can go faster than the human or existing tools. Assuming for a moment existing tools are about as fast as the AI method, use both in parallel and take the best, and you don't have to care about spending the time a human would.

    • @slicer95
      @slicer95 2 роки тому

      Wait till a post-silicon bug is discovered and you have to go back to PD to isolate it

  • @rossadew4033
    @rossadew4033 3 роки тому

    Off to TechTechPotato's channel.

  • @quantum7401
    @quantum7401 2 роки тому

    8:27 YES!

  • @proxy1035
    @proxy1035 3 роки тому

    chip design and custom ASICs will always be one of those far dreams of mine that i will never fulfill because it just looks really really complex and is pretty expensive.

  • @ktofa3822
    @ktofa3822 2 роки тому

    For my own perfection, i’m looking for analog ic design courses?.Thx

  • @augustday9483
    @augustday9483 2 роки тому

    A lot of this feels like the Traveling Salesman problem. There are NP-hard math problems here that humans have simply not been able to solve (and that may be unsolvable in non-factorial time).
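
    To give a feel for that factorial blow-up, here is a tiny self-contained Python sketch (toy coordinates, invented for this example) that solves a 7-city travelling-salesman instance by brute force. The same code is hopeless at placement scale, where a netlist can contain thousands of blocks:

```python
import itertools
import math

def tour_length(cities, order):
    """Length of the closed tour visiting `cities` in `order`."""
    return sum(
        math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
        for i in range(len(order))
    )

def brute_force_tsp(cities):
    """Try every permutation of the non-start cities: (n-1)! tours.
    Fine for 7 cities (720 tours), hopeless for 30 (~8.8e30 tours)."""
    n = len(cities)
    best = min(
        itertools.permutations(range(1, n)),
        key=lambda rest: tour_length(cities, (0, *rest)),
    )
    return (0, *best), tour_length(cities, (0, *best))

# Toy instance: 7 points on a circle, so the optimal tour is the circle.
cities = [(math.cos(2 * math.pi * k / 7), math.sin(2 * math.pi * k / 7))
          for k in range(7)]
order, length = brute_force_tsp(cities)
```

    At chip-placement scale this is exactly why heuristics like simulated annealing, and now learned policies, are used instead of exact search.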

  • @khatharrmalkavian3306
    @khatharrmalkavian3306 3 роки тому +1

    This is a good example of why AI can surpass humans in expert-system domains. You pointed out that the AI found solutions with performance comparable to those generated by humans, but you also point out that it generated a solution in 24 hours whereas the humans took 6 months. The thing is that the cost of the AI is only electricity, and it can generate 180 competitive solutions in the time it takes a team of salaried humans to make one. In that case, the odds that one of the AI solutions is the best are 180/181 (99.45%), and the AI will do this at a fraction of the cost. Additionally, in the rare cases where the humans win the race with their single entry, that entry can be added to the AI's training set for an immediate improvement.
    Moreover, the efficiency of the AI can improve substantially with application specific hardware, and meta-analysis of its work can potentially be used to generate "unnamed" rules that can radically improve its efficiency and/or quality. Humans can also improve, of course, but the way our minds work is largely incompatible with unnamed rules (allowing for zen masters here), so there's a lot of extra overhead involved.
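
    The 99.45% figure above is just 180/181, treating the 181 independently produced designs as equally likely to be the single best; a quick sanity check of the arithmetic:

```python
# One 24-hour AI run per day over the ~6 months (~180 days) the human
# team needs for a single design gives 180 AI candidates plus 1 human one.
ai_designs, human_designs = 180, 1

# If each of the 181 designs is equally likely to be the single best,
# the winning design comes from the AI with probability 180/181.
p_ai_wins = ai_designs / (ai_designs + human_designs)
print(f"{p_ai_wins:.2%}")  # → 99.45%
```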

    • @LHKKKing
      @LHKKKing 2 роки тому

      And once AI has evolved a few generations ahead of humans, its next-generation concepts will be hard for humans to comprehend.

  • @harrykekgmail
    @harrykekgmail 3 роки тому +6

    Floorplanning, like predicting the weather, is a job for quantum computers. Probably in 5 to 10 years' time it will be resolved in minutes, to everyone's satisfaction.

    • @PlanetFrosty
      @PlanetFrosty 3 роки тому

      Quantum annealers exist now and are growing in qubit numbers. We are developing new quantum process chips based on a hybrid technology.

    • @tonysu8860
      @tonysu8860 3 роки тому

      Only if you want or need to laboriously calculate every possibility to arrive at the one true solution.
      Until then, the AlphaZero AI described in this video can or should do a very credible job by a series of expert guesses, pruning the decision tree of bad lines without having to calculate them fully to establish how bad they are.

  • @rem9882
    @rem9882 3 роки тому

    Have you made a video on the European RISC-V chip by SiPearl?

  • @onetruekeeper
    @onetruekeeper 2 роки тому

    The A.I. is designing the chips within the set of rules programmed into it. It cannot design outside those rules since machines cannot consciously decide or create.

  • @Weathering123
    @Weathering123 5 місяців тому

    Make a video about Quantum processors, please 🙏

  • @呂奕珣
    @呂奕珣 2 роки тому

    Does the Apple M1 use this?

  • @bujin5455
    @bujin5455 2 роки тому

    I think the term "AI" is eventually simply going to mean ML. It's joked that an approach is only AI while it hasn't been adopted. Most people wouldn't consider the A* algorithm, genetic algorithms, Bayesian networks, or B-tree searches to be AI at this point. However, neural-network-based ML is really truly AI, as it replicates the processes nature uses to generate intelligence in the first place. Everything else is simply search algorithms with some innovative flourishes.

  • @mostlymessingabout
    @mostlymessingabout 2 роки тому

    Going deeper... 😎

  • @JinKee
    @JinKee 3 роки тому

    “droids building droids? how perverse!” - C-3PO, The Revenge of the Sith

  • @vermilli5170
    @vermilli5170 3 роки тому

    With big tech such as Apple, Google, and Amazon starting to design their own chips, do you see them taking market share from companies such as AMD/Intel?

    • @SebastianRosca
      @SebastianRosca 3 роки тому

      Considering the fact that Apple made millions of Macs with Intel silicon and now they have their own chips, in a way you can say the market share has changed. The same is valid for Google's and Amazon's data centers that used to rely on Xeon processors. From a direct-consumer perspective, we won't be seeing an equivalent of the new Ryzen or Core i7 from Google or Apple, but when you consider that a data center has thousands or tens of thousands of CPUs and GPUs, and there are hundreds of such data centers scattered around the globe, it's easy to see that Intel especially has lost quite a bit of ground.

    • @Nadox15
      @Nadox15 2 роки тому

      @@SebastianRosca It will be interesting to see whether the x86 architecture will even be relevant in the coming decades. Intel is lucky that so much software is based on their architecture. If more and more software gets ported to ARM or even RISC-V, Intel will lose more and more market share (at least with their own architecture).

  • @stimpyfeelinit
    @stimpyfeelinit 2 роки тому

    nice pic at 7:07

  • @gregdee9085
    @gregdee9085 Рік тому

    This has always been the case, for decades. It's the same tech that autorouters have been using for PCB or FPGA routing, etc.

  • @tristanwegner
    @tristanwegner 2 роки тому

    This is one of those areas where a superhuman AI has much potential for iterative self-improvement on the path to the singularity.

  • @KoviPlaysPC
    @KoviPlaysPC 2 роки тому

    love the video!

  • @dec13666
    @dec13666 3 роки тому

    Me, an Electronics Engineer turned into AI Researcher: _Beautiful 🥺👍_

  • @cougarten
    @cougarten 3 роки тому +6

    just fyi: your volume is a bit low.

    • @anonimuse6553
      @anonimuse6553 3 роки тому

      I found the volume much better this time.
      Curious.

  • @markissboi3583
    @markissboi3583 2 роки тому +2

    How small is a silicon circuit, you might ask? 🙃 Try this idea for size:
    focus a laser beam from Earth onto a fingernail on the Moon and lithograph a circuit line that small.
    So have they reached the atomic limit on size? Never say done; there's always some guy with a new idea.
    In 2035 some tech scientist runs into work at some lab: "I got it... I got it!"
    Like Taggart in Blazing Saddles :)

  • @video_explorer
    @video_explorer 3 роки тому

    Over 9000!!!! Liked !!

  • @fg786
    @fg786 Рік тому

    We should not forget that the neural networks solving difficult problems run on machines that are vastly more power-hungry than the human brain. AlphaGo ran on about 2,000 CPUs and 400 GPUs, each probably using at least 200 W of power. A typical human runs on about 100 W, and the brain uses less than a third of that.
    There is a long way to go in this regard.
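
    Taking the comment's figures at face value (2,000 CPUs, 400 GPUs, 200 W each, and a common ~20 W estimate for the brain, which is indeed under a third of the body's ~100 W), the gap works out to roughly four orders of magnitude:

```python
# Back-of-the-envelope power comparison using the figures in the comment.
cpus, gpus = 2000, 400
watts_per_device = 200                              # "at least 200 W" each
alphago_watts = (cpus + gpus) * watts_per_device    # 480,000 W = 480 kW
brain_watts = 20        # common estimate, under a third of the body's ~100 W
ratio = alphago_watts / brain_watts
print(f"AlphaGo ~{alphago_watts // 1000} kW vs. brain ~{brain_watts} W: "
      f"ratio ~{ratio:,.0f}x")
```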

    • @JameBlack
      @JameBlack Рік тому

      A typical human cannot multiply two 3-digit numbers in their head.