Of course AI nowadays can't cover the WHOLE process of design, but it can automate and suggest actions along the way, which is also a great advantage for accelerating further innovation.
I'd like to see a deep dive into the possibilities of nanoimprint lithography at future nodes such as "2nm" and similar. Canon might be on the way back, without fancy mirrors from Carl Zeiss!
I'm a 3D artist/designer in Unity and not a programmer. I've found that for simple scripts it works for me. I've learned it's about how you ask, and how to explain your need in Unity terms. My partner and programmer has become a GPT ninja.
This process of giving the LLM "hats" that it wears for various steps is showing effectiveness in many fields, from coding games, to writing novels (yes, you make it a slush editor, then a content editor, etc.), and now to chip design. We need to think of prompting large problems more like how you would design a company, with multiple roles that either specialist AI agents can be slotted into or a general AI can be guided to wear a hat for. This is how humans naturally think anyway, utilizing latent space activation in the mind, and it's how we can take our use of AI as partners to the next level.
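The "hats" pattern described above can be sketched in a few lines. This is a minimal illustration, not any particular framework: `complete` is a hypothetical stand-in for whatever LLM API you use, and the role names are invented for the novel-editing example.

```python
# Sketch of the "hats" pattern: the same model is prompted with a
# different role (system prompt) at each step of a larger pipeline.
# `complete` is a placeholder; a real implementation would call an LLM API.

def complete(prompt: str) -> str:
    # Stand-in for an actual LLM call.
    return f"[response to: {prompt[:40]}...]"

ROLES = {
    "slush_editor": "You are a slush editor. Reject or shortlist the draft.",
    "content_editor": "You are a content editor. Fix structure and pacing.",
    "copy_editor": "You are a copy editor. Fix grammar and style only.",
}

def run_with_hat(role: str, task: str) -> str:
    # Prepend the role's "hat" to the task before calling the model.
    prompt = f"{ROLES[role]}\n\nTask:\n{task}"
    return complete(prompt)

def pipeline(draft: str) -> str:
    # Each stage wears a different hat; its output feeds the next stage.
    for role in ("slush_editor", "content_editor", "copy_editor"):
        draft = run_with_hat(role, draft)
    return draft
```

Swapping the contents of `ROLES` for, say, RTL designer / verification engineer / physical-design reviewer gives the chip-design version of the same pipeline.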
This has already been part of the discussion for a while, but we are finally seeing more talk about the specialized agents. Most conversations about this involve a central general agent that connects with a number of specialized agents.
@@mc9723 Exactly. A novelist I know says he personally acts as the general agent in the center, organizing and connecting the specialist AI agents he's created around himself. In this way he's largely replaced what his publishing house would normally provide, including the marketing team. He workshops his own work whenever he likes; no waiting months for feedback, which helps him keep his workflow going. It's increased his productivity by an order of magnitude. Until we get AGI, the human functions as the general AI agent in the center. Leave it to a sci-fi writer to figure out how to get it done.
@@armartin0003 «Leave it to a sci-fi writer to figure out how to get it done» -- Sorry, but the problems were, and always will be, solved by those who live on our planet. Sci-fi is nice; I read it all when I was young. It can be useful as long as one is able to distinguish between fantasy and reality constraints. So while sci-fi is useful in enriching minds, sci-fi writers are harmful when they do not understand what they are talking about.
@@voltydequa845 Writers live in the clouds, engineers live on the surface of the earth. However, without a greater view of the realm of the possible provided by creative minds, engineers would still be designing better plows.
@@armartin0003 «Writers live in the clouds, engineers live on the surface of the earth. However, without a greater view of the realm of the possible provided by creative minds, engineers would still be designing better plows.» -- Nope. Mine was not against fantasy, but against the confusion where commercial fantasy-hype is taken as reality. Writers write their narrative, and analysts write their view on where the world could be headed. Or, said in other terms, my advice regarding intelligence in sci-fi is to avoid the cognitive confusion that arises when a ParrotGPT is perceived as a form of intelligence. So mine wasn't about clouds, but about planets, where cognitive confusion is presented as fantasy. Sci-fi writers should try to understand what it is all about (GPT, LLM, etc.) for the sake of avoiding extra confusion. In those pattern-matching technologies there is no cognitive ability whatsoever. And there won't be, since the bridging with real AI - which can only be implemented by over-complex (in terms of definition, as well as calculus) symbolic logic - is impossible, given that the GPT LLM can't produce (formal) knowledge, and the symbolic-logic part can't use, and anyway won't need, the GPT LLM part, since the analysis and generation would be logical. "Our Martin is eating his ice-cream", where the words and their order are chosen not for their meaning or for grammatical reasons, but just because in 99.999% of the texts it appears this way. Symbolic logic is so hard because it has to do with intelligence. All the rest is talking the talk by resorting to imitation. P.S. I feel tired; I hope to have transmitted the essence of the difference, and so why all this is nonsense out of excess of hype and disinformation.
I wonder how updates to LLMs (like increasing the context window size or the parameter count) will change their ability to generate chips. My guess is the biggest leaps will come from these updates for a while, not necessarily the additional training in chip design specifically. Definitely an interesting idea though!
@@dchdch8290 In the short term, yes. In the long term it's likely that the best models for chips will be general-purpose ones. In the same way that Google's RT-2 performed on average 50% better with the Open X dataset (a HUGE cross-robot dataset), and even outperformed specialized robot AIs.
Spot on! LLMs make the entire chipmaking process faster and better. This leads to better chips, which will allow us to power even larger LLMs. But there is still a lot of room for efficiency in LLMs. Good thing LLMs can help us make more efficient LLMs! It's all snowballing, and every tech-related sector will massively increase in productivity, value and performance over the coming years!
I think we can leverage ChatGPT for any system and design a system given enough context. I work for a company that uses ChatGPT APIs for replacing teachers in some form. What I did for them is just break down the various steps that a teacher would fundamentally perform. Similarly, I think if we can replicate this approach in chip design, we can accomplish at least the repetitive and common tasks. I hope we can solve the "creativity problem".
«i think we can leverage chatgpt for any system and design a system given enough context.» -- Too vague. To obtain what, exactly? ---- «i hope we can solve the "creativity problem"» -- The concept of creativity implies thoughts that have no precedents. But GPT is just a parrot technique. What can (!) be seen as creativity can come from calculus on symbolic logic, that is, from almost total calculus (of all the possible alternatives) that could come up with solutions humans couldn't find, because they are too many and too deep. Anyway, nothing to do with GPT parroting.
@@voltydequa845 I don't know what you are referring to when you mention calculus and other techniques to solve the creativity problem. Right now they are just using different words and sentences to solve repetitiveness. Coming back to my earlier point: what I mean is that if we break down the complex tasks a human would traditionally perform and replicate them in some form the LLM understands, then it will be able to do the task with greatly enhanced capability.
@@vatanrangani8033 «I don't know what you are referring to when you say about calculus and other techniques to solve creativity problem. Right now they are just using different words and sentences to solve repetitiveness.» -- Take a quick look at Prolog, for example. As for the different words, I guess they are just resorting to pattern matching that crosses synonyms with their usage frequency. It is just another level of pattern matching. Back to calculus - since you seem a kind soul - it is also useful to think in terms of a comparison between algorithmic chess and GPT chess. The first calculates by means of ramification and evaluation toward the goal of winning, while the second just pattern-matches using its database, where a response move is based on the probability of victory (and of course not necessarily of that move). So the first one calculates and evaluates; the second one blindly imitates. ---- «Coming to an earlier point, what I mean is if we break down complex tasks a human would traditionally perform to complete the task and we can replicate it in some form that llm would understand, then it will be able to do the task with very enhanced capability.» -- I do not believe so. I do not believe that complex tasks in disciplines like teaching (and similar) can be so easily broken down. We should stay aware that we are talking about teaching in the form of talk, where too often anything goes, whatever way you say it. I mean there is no, and there cannot be (or it would be too difficult), a measure of the quality of teaching. As for me, I prefer the Google search engine's way, because the alternatives keep my mind open, bring new ideas, and let me preserve a critical spirit. Anyway, a very complex topic.
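The chess contrast drawn above (calculation versus imitation) can be made concrete with a toy minimax search, the classic "ramification and evaluation" loop. This is a generic sketch, not any real engine; the tiny demo "game" (add 1 or 2 to a number, higher is better) is invented purely to exercise the recursion.

```python
# Toy minimax: the "algorithmic" player described above, which ramifies
# over future positions and evaluates them toward a goal, rather than
# pattern-matching a database of past games.

def minimax(state, depth, maximizing, moves, evaluate, apply_move):
    # At the horizon (or with no legal moves), score the position.
    if depth == 0 or not moves(state):
        return evaluate(state)
    # Ramify: score every child position one ply deeper.
    results = [
        minimax(apply_move(state, m), depth - 1, not maximizing,
                moves, evaluate, apply_move)
        for m in moves(state)
    ]
    return max(results) if maximizing else min(results)

# Demo game: state is a number, a move adds 1 or 2, evaluation is the
# number itself. Maximizer moves first, minimizer replies.
demo = minimax(
    state=0, depth=2, maximizing=True,
    moves=lambda s: [1, 2],
    evaluate=lambda s: s,
    apply_move=lambda s, m: s + m,
)  # the maximizer plays +2, the minimizer replies +1, giving 3
```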
Very interesting, and conceptually it makes a lot of sense that you could leverage the sort of AI which has been proven at playing chess and go, and make a game of optimising circuits. Going the other way, and hoping for generative / creative design of chips in what sounds like more of a top-down approach, that sounds a lot more optimistic by contrast (to a lay person anyway). Could they build on the game, take collections of optimised circuits and build larger constructions instead? Move upwards from the bottom... Feels like that might better suit current capabilities.
I already used ChatGPT during my Digital Systems course to help me with the VHDL language, but it's certainly not perfect and you still need to know the concepts. I made a VGA controller as my first project, and now I'm working on the Snake game, written in assembly language and running on a MIPS processor inside an FPGA.
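As a sanity check before (or after) writing such a VGA controller in VHDL, the pixel clock follows from the standard 640x480@60 Hz timing totals. The porch and sync counts below are the standard values; note the nominal spec clock is 25.175 MHz because the real refresh rate is slightly under 60 Hz, so this is a back-of-envelope figure.

```python
# Standard VGA 640x480@60 Hz timing: the visible area is 640x480, but
# each line is 800 pixel clocks and each frame 525 lines once the
# front porch, sync pulse, and back porch are included.
H_TOTAL = 640 + 16 + 96 + 48   # visible + front porch + hsync + back porch = 800
V_TOTAL = 480 + 10 + 2 + 33    # visible + front porch + vsync + back porch = 525
REFRESH_HZ = 60

# Pixel clock the FPGA design must generate (exactly 60 Hz gives 25.2 MHz;
# the nominal spec value is 25.175 MHz at ~59.94 Hz).
pixel_clock_hz = H_TOTAL * V_TOTAL * REFRESH_HZ
```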
I observe that AI more accurately means "Augmented Intelligence". This work is a perfect example of why. I would look at training a set of domain-specific AIs: one skilled in lithography, for example, another in architectures. Then cascade them so that one is a controller AI that can create prompts for the others, collate the results, and request reviews by spawning other instances of each domain to check the work. Hugging Face released a paper showing this kind of agent spawning other domain-specific agents to farm out a more complex task to instances that are really good at each bit. In terms of training, I would literally use the materials we use to train a human to be skilled at x, using the tests to test them, but including, for example, every paper handed in for assessment by students. There would be shared knowledge they all would have, such as physics and logic, so they can communicate amongst each other. Each item of knowledge would need context and relationships - such as physical laws, materials, tools, process, hardware, code, software, etc. There is a reason why there is more than one of us; it makes no difference which instance of intelligence one of us might be.
Thanks for the great coverage on this, Anastasi! It's scary how quickly things can move in the world of AI. On a related note, I've been wondering something that I hope you might be able to shed light on: If a company that makes an AI chip decides to use Intel's fabs to make them, could doing so allow Intel to simply steal the critical parts of the design for use in its own AI chips? Is the only thing stopping them a sense of honor or is it not nearly that easy to do so?
@@mvasa2582 Oh, I don't mean lifting the whole thing wholesale. More like, "Hey, we seem to be stuck at X, but this chip here that we're fabbing solved it in this way; maybe we could do something similar?" Basically, protections aside, I'm wondering, from a purely tech standpoint, whether it can be that easy to steal ideas if you are fabbing the chip, or whether it's pretty much impossible. Essentially, could the fab "pay the fine" if it meant getting through their wall?
@@garhong9125 Good point. Any technology can be copied, replicated or stolen. I believe with AI we may even move to a world without IP. Innovation at the speed of light! 😂
This is amazing, but I saw it coming as things evolve. You can imagine what else can be done in the next 6 months to a year. Thank you as always; your videos are amazing and explain things in a professional but simple way, even for someone without an engineering degree.
«You can imagine what else can be done in the next 6 months» -- People with a void memory think in terms of "what else..." without reference to "what so far...". So after six years (10 times more), without any real (not just vague marketing hype) progress happening, they will still be here repeating their parroted trust in the bright future, based on the bright future of the past six months.
The cool part about ChatGPT is that you might be able to start getting it to program by itself, just by talking to it :) That's fantastic :) Levels of coolness, rising!
In many cases we cannot have an efficient chip design today. We have to use matrix design patterns, wasting millions and millions of transistors. Matrix patterns are easy to understand, microcode and troubleshoot. It would be amazing if we could have AI optimize chip layouts and nanostructures. It is not only the people (numerous teams) that spend a year or two; it is also very long computer optimization and calculation. AI can lead us to the next generations of chips, faster than ever.
Anya, great show today! I think going for a PhD would be a great idea, if you already have your financials in order and under control, of course. One reason might be that you are so good at lecturing on UA-cam that teaching in academia may very well be what can truly fulfill you. Just a thought.😊
She already has a valuable PhD relative to her public. Share with others. Help spread the words of Anastasia. When she gets a hundred thousand subscribers she will start her own academy for all of you. And you are going to learn it all in much more concrete terms than from lectures in a real academy.
On the circuit neural networks: that is the absolute opposite corner from a constrained-random verification approach, I would say. You cannot verify that IP 🙂
Makes sense. Tenstorrent has talked about it and is training models to help them design their chips. I bet Nvidia and TSMC are already deep into this research.
I spent some time with ChatGPT trying to design a neuromorphic chip (because we really need one for the masses) and it helped... mostly, though, I feel like it taught me a lot.
A chip is only a lot of blocks wired together. LLMs use statistics to place words and give them meaning, so if an LLM is trained on transistor blocks, it may be possible to use it to build chips.
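The "blocks wired together" picture can be illustrated without any AI at all: every basic logic gate, and from there larger structures, can be composed from one primitive block such as NAND. A minimal sketch:

```python
# Everything below is wired out of a single primitive block: NAND.
def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

# Each gate is just NAND blocks wired together.
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor_(a, b):
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

# One level up: a half adder is just these gate blocks wired together.
def half_adder(a, b):
    return xor_(a, b), and_(a, b)  # (sum, carry)
```

Composing such verified blocks hierarchically is exactly how chips are built, which is what makes the block level a plausible vocabulary for an LLM.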
You have an interesting channel, but I have doubts about ChatGPT chip making. For this, it is better to use the Anthropic API. In my opinion, it is better versed in modern nanoelectronics. In particular, the Anthropic AI chat offers the most advanced technical solutions in photonics and molecular nanoelectronics. ChatGPT can simply be used additionally, for extra data processing. In general, the idea is interesting, but for a full-fledged implementation it is better to use a multi-agent system with a certain number of defined modalities.
This scares me very much because they showed this technology in a closed event back in 2012 and they are revealing this "now"? What else are these companies doing behind the curtain that we don't know about?
It's only a matter of time before AI can do better than humans. It'll be able to run calculations and test design patterns much faster than a human will ever be able to. I'd assume it might not happen until AGI, or until the training, as you say, is guided in that particular direction. (Kind of like cancer detection for AI.)
Is there an AI that can design optimized PCB layouts, given a circuit diagram and output Gerber files? Because I think it is a massive pain to do by hand!
As AI becomes smarter, like ASI, eventually it will be able not only to design but also to test without actually making the hardware, just by reviewing everything.
Long, long way to go... well, I agree. Just a few days ago, I watched a battery scientist ask ChatGPT some questions about lithium-ion batteries, and she spotted some flaws and issues on another UA-cam channel. I also left her a message saying I think ChatGPT isn't so accurate right now, because I asked ChatGPT some questions about industrial furnaces in different ways and found some contradictions in February this year. I know there are many different backgrounds among the scientists and professors behind the scenes at ChatGPT, and those scientists were not battery professors who received a Nobel Prize like John Goodenough... Different minds produce different output, after all. But maybe ChatGPT has more of a professional background in circuit design, I guess. Maybe those experts are the ones with the most expertise in electronic circuits and related fields.🤔
I find that ChatGPT is very good for specific functions, if you know what you specifically want. You can build things block by block, creating and implementing supporting functions without ChatGPT even knowing what it's for. For example, I used it to create a function that leverages antonyms to change the sentiment of a string to positive or negative.
As a chip designer I do find vast differences in quality between different designers, one extreme example was I wrote an audio extraction logic block for SDI that was 1/100th of the size of paid for IP. The problem wasn't clever or efficient implementation but a different thinking about the top level, this is something AI is still weak in, but I do expect AI will be a useful tool to help designers in the current generation. The place where implementation quality would be key would be AI controlled full custom digital design, the difficulty here would be immense though and produce designs that likely only the AI itself would understand (a human might take years to decode it), best-o-luck finding a bug in that :)
Now that LLMs like LLaMA are smaller and more efficient, it wouldn't be hard to imagine using three different LLMs to design a chip. Use one with GPT-4V to recognize, label, and then analyze microchips. Use another as an adversarial agent that does QA. Then a third one could be a custom in-house LLM that uses the previous data to model entire systems before any prototype is fabricated.
Hi Anastasi, I aim to become an engineer working in the chip design industry. What Xilinx/Altera board should I buy to learn? My budget is under $150.
The first question I would've asked is whether the form of knowledge representation is appropriate for the task at hand. LLMs would not be the first thought that comes to mind. They're oriented towards language ("predicting the next word", or riffing on more abstract themes). I would've thought that for chip design one needs, at a minimum, 3-D spatial awareness, just as for AlphaFold you need to build in some priors regarding structural biology. ChatGPT (LLMs) would not be the right tool for protein folding. Perhaps LLMs can help with high-level themes, at the idea level, but that's about it. They're not really aware of the physical world. (The UA-camr AtomicBlender tried to use ChatGPT to design a novel nuclear reactor, to see if it would replace nuclear engineers someday. Not anywhere close.)
It should learn LabVIEW first; then it will be simpler to move to hardware implementations, since it knows the data flows and processes needed at each node.
My experience with GPT-4 writing code, even simple code, was not stellar at all. Draw a tree using p5.js and fractals: total garbage came out (angle but no -angle, so all the limbs were on the same side of the tree). Same request, but asking GPT to write the comments for the code, and it wrote documentation and code that totally worked. So asking for the thing didn't work for me, but asking for the stuff around the thing produced high-quality docs and a working program. BTW, GPT threw in the code without me asking; I just asked for the documentation, not the code.
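For reference, the bug described (using `angle` for both branches instead of `angle` and `-angle`) is easy to see in a language-agnostic sketch. This Python version just collects line segments rather than drawing with p5.js; the branch angle and shrink factor are arbitrary illustrative values.

```python
import math

def tree(x, y, angle, length, depth, segments):
    # Recursive fractal tree. The bug described above is branching with
    # the same angle offset twice instead of +offset and -offset, which
    # puts every limb on the same side of the trunk.
    if depth == 0:
        return
    x2 = x + length * math.cos(angle)
    y2 = y + length * math.sin(angle)
    segments.append(((x, y), (x2, y2)))
    tree(x2, y2, angle + 0.5, length * 0.7, depth - 1, segments)  # left limb
    tree(x2, y2, angle - 0.5, length * 0.7, depth - 1, segments)  # right limb (the missing -angle)

segments = []
tree(0.0, 0.0, math.pi / 2, 100.0, depth=4, segments=segments)
# depth 4 gives 2^4 - 1 = 15 branch segments, split across both sides
```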
If AI can be used to design simple chips well, that might be the best start. Of course, there is no need to have AI for this, but if it can understand all of the RC propagation delays, tolerances, etc., and can scale a design to use different nodes (lambda), then that creates a foundation you can trust while waiting for AI to advance enough to be ready for a CPU or GPU, etc. Perhaps it could prove its worth with analog chips, or with substrate interconnects and logic between different tiles, or in optimizing microcode.
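As a flavor of the RC propagation delays mentioned here: the usual back-of-envelope figure is that a step input crosses 50% of its swing after ln(2)·R·C. A minimal sketch, with made-up example values:

```python
import math

# Back-of-envelope RC propagation delay: a step input reaches 50% of
# its final value after about 0.69*R*C (from solving 0.5 = 1 - exp(-t/RC)).
def rc_delay_50pct(r_ohms: float, c_farads: float) -> float:
    return math.log(2) * r_ohms * c_farads

# Illustrative example: a 1 kOhm driver charging 100 fF of wire load.
delay_s = rc_delay_50pct(1e3, 100e-15)  # roughly 69 picoseconds
```

Real static timing analysis uses far more detailed models (e.g. Elmore delay over the whole RC tree), but this is the first-order intuition a tool, or an AI, has to capture.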
With the continuous advancements in artificial intelligence and the accumulation of data, it's inevitable that AI will significantly influence the chip design sector. This technological evolution promises substantial benefits in terms of efficiency and quality. However, from a philosophical standpoint, human attributes such as creativity and empathy remain irreplaceable, for now. AI will have a glorious future.
Once we reach the point where it understands the link between hardware and software, I think it will be the beginning of the singularity, and perhaps a year or two before we are left at the event horizon.
Not surprising, since the basic design blocks of computers are over 60 years old. Most of the layout was already automated, with engineers hand-positioning blocks, busses and interconnections, and letting the tools do the optimal routing. The only thing that makes it better is more pipelining and parallelism to get around basic design bottlenecks. AI can test thousands of tradeoffs to select the best ones that engineers don't have the physical time to test and analyze.
Making an LLM using all the legally obtainable chip designs, including old ones, would likely be far more effective at this, though the training process takes time.
I don't necessarily agree with giving ChatGPT other devices to look at, per se, or what they look like, because then you are narrowing what it will do. I would think the best way to go about it is to give it the parameters, capabilities and restrictions of a transistor, then have it design its own off of that, off of what we know to be true about the limitations of a transistor. But if you start banking on "this is what others look like", it's going to look at that, and it might model off of that instead of generating something new.
IMO the problems and difficulties described in this video are temporary and unimportant relative to the likely capability of AI to design chips well almost moment by moment. That last bit referencing DeepMind is important; IMO it's probably one of those incredibly important technologies whose significance is hard to overestimate. I lean on how that technology has been implemented in the game-playing world, creating a world chess champion after about a year of training. What sets DeepMind apart from all other AI is its approach of 100% machine self-learning, even teaching itself how to learn without the slightest guidance or instruction from any human. Given only the objectives and rules of the endeavor, in a game-playing environment it begins like a newborn infant, failing at the simplest attempts to reach its given goals. But like human experience and learning, by trial and error and learning from its mistakes, the AI eventually should be able to attain at least 97% competence, which is the generally accepted threshold to be called world class: at least equal and superior to nearly all humans with the highest levels of advanced learning, years of experience and recognition by colleagues. The unexpected result of DeepMind is real creativity, innovation and artistic elegance, which breaks the paradigm that machines are only capable of what they're instructed to do and are incapable of imagination. The questions I'd ask of those evaluating AI's ability to design computer chips are: 1. Have you given the DeepMind approach at least a year or so of 24/7/365 training to build its neural-network algorithm? 2. Is the training truly "AlphaZero", without human intervention, or did someone inject some kind of human input, maybe to hurry the process but likely with adverse consequences? 3. Do analysts realize that DeepMind has changed the definition of what a world-class solution is?
An interesting revelation from the game-playing versions of DeepMind (primarily Lc0 in the chess world, but seen elsewhere as well) is that there can be many versions of world class, and some aren't going to make sense to humans. Human experience, education and training tend to produce a common human way of approaching problems, and therefore often result in similar solutions; a human expert can easily recognize expert features and agree the solution is "expert" or world class. A completely independent and inhuman approach to learning and thinking, though, can produce a very different-looking approach, but still be excellent and even elegant in its solution, at least equal to the best that humans can conceive. It also sounds like someone is applying DeepMind to optimize designs created by humans. If the chess-playing Stockfish AI is any example, that's the wrong approach, because it probably preserves or builds on human concepts which may be unwittingly flawed. More likely the process should be turned around: the AI should build the basic design, and then, if desired, a human should be employed to audit and optimize further. This is all very exciting. Keep in mind that DeepMind and similar neural-network AI became practical only when the hardware became powerful enough to implement advanced AI approaches like the Monte Carlo evaluation method, around 2017. It was only since then that we've had this explosion in AI uses that have the potential to radically change the entire human experience for everyone but the most technologically disadvantaged. Large language model AI has the potential to be another seminal advance like DeepMind, because language is so closely associated with, and is the gateway to, expressing and exchanging thoughts and even the most basic process of human thinking.
The human mind has been and is still one of the unsolved mysteries of science, and this type of AI is more closely associated with the science of being human than anything else that has come before.
I think in 2-3 years AI will do most of the work in chip design. People will just need to "edit" the results. Just like translators today. Most of the work is done by machine translation systems.
Mathematical descriptions are more optimal than verbal descriptions. With a mathematical LLM, you could optimize and remove bottlenecks that wouldn't be obvious with natural-language descriptions.
Just think if all the leaked/stolen tech out there got gobbled up into GPT-4. There is a lot of proprietary info that could be used for training, and a lot has already leaked from hacks. It's unethical, but someone is going to do it sooner or later.
ChatGPT, as an LLM, is not creative like a human brain. It just reproduces ideas from other sources fed into its library. At best, it combines such ideas or runs optimizations. Synopsys already provides such optimizations for lower power usage, etc. Such LLMs must be fed with thousands of examples to get functional output, but there are only so many useful chip designs out there. Even Intel, AMD or any other company in the industry has only so many independent designs; evolving former designs is more cost-effective than starting from scratch. Every simple trick a company learns stays a closely kept internal secret. An AI with the knowledge of all chip companies together might result in the best chip ever? Maybe taking a step back and rethinking leads to something good enough for today - like the RISC-V ISA led to several designs, from single-core to combined multi-core layouts. We noticed what Apple did with ARM cores licensed and integrated into the A1x or Mx. RISC-V is much younger than Advanced RISC Machines (1985), so we have good expectations... so please do not follow DEC Alpha.
I was just working on this. My go at it this morning was self limited to using standard language-definitions. Morning warmups. Framing within a Mind-Body classic badcode-misdirect. How do you "solve" the Mind-Body disconnect? Increase sensual awareness, teaching skillful means in managing subtle biochem/neurotransmitters, on a moment by moment bias, through events requiring focus, impacting the "felt sense of being in the world", our sense of Mind. AI needs that too. So far AI is not sufficiently connected to sensors, sensors that include self surveillance. My approach to AI is through HMI, Human Machine Interfaces. This approach address the input requirements. (I've been sick for a while. Healing is a Miracle.) Meaningfulness includes the event, the context, and the "key" the "scale". Multimodal modeling.
If one AI supercomputer works with another to test a third, and they both have U.S. government military and intelligence-level security clearances, their access could teach the programmers in a short time.
They could easily make that smarter too; ChatGPT is very stagnant. The future will indeed be wild. We can't do push-button yet, by the way, but that's actually easy. The problem with push-button is reliability, predictability and other variables. Nothing is really impossible, though; if you're interested in that, go for push-button, haha. How well it may perform depends on who makes it best, until AGI does.
ChatGPT is not good at generating code. It is inaccurate and cannot generate good structures. You have to find all the issues yourself. If you ask for test cases, they are not complete. In my estimation, as an experienced developer, it performs much worse than a junior programmer and is not able to learn from the mistakes you tell it about.
HELP ME, what is this ONE word she says? "The approach they use is called circuit neural networks, a new type of neural network which turns wires into BLANK and logic gates into nodes."
Imagine the first ONSLAUGHT CPUs (machines designed by machines). They might even figure out how to make a CPU that is a solid crystal and envelop it in a shroud that cools it and connects all sides to a PCB, with the internals being a flow pipe so that, instead of needing contact, you pump coolant directly through the CPU channels for a 100% flow-to-cooling ratio.
This is just the bare beginning of AI used to create new and innovative chips and other circuitry. Soon it will be as easy as just telling the AI what you want the chip to do and the various parameters... almost like pushing a button. Then the next generation will come out, where the AI anticipates the chips you need for your project and comes up with those designs just in case you do need them. AI will also be used to develop new chip manufacturing techniques. On and on and on... AI will consume the jobs that used to require human experts. Good thing you have this UA-cam channel, Anastasi!! Wait... you are actually doing these UA-cam videos, right... not AI?
A machine cannot think in abstract terms like a human, or factor in emotions... You have seen the Blade Runner movies. The Replicants (androids) failed every time they were asked simple but abstract questions...
Personally, I think LLMs are good at what their name says, language, but they lack comprehension of things outside language. ChatGPT is not the best thing to use for chip design, nor is any LLM; you need something more comprehensive that understands the world in a way language cannot encompass.
Get 10% off your first month of therapy with our sponsor BetterHelp: betterhelp.com/anastasi Let me know what you think!
Computers are giving birth to computers, nothing to worry about 😄
I can listen to you videos all day. You're so smart with such a pleasant voice.
I for one welcome our robot overlords. 🤖
Prompt Engineer is the new team leader position.
@@armartin0003 «Leave it to a sci-fi writer to figure out how to get it done»
--
Sorry, but the problems were, and always will be, solved by those who live on our planet. Sci-fi is nice; I read it all when I was young. It can be useful as long as one is able to distinguish between fantasy and real-world constraints. So while sci-fi is useful in enriching minds, sci-fi writers are harmful when they do not understand what they are talking about.
@@voltydequa845 Writers live in the clouds, engineers live on the surface of the earth. However, without a greater view of the realm of the possible provided by creative minds, engineers would still be designing better plows.
@@armartin0003 «Writers live in the clouds, engineers live on the surface of the earth. However, without a greater view of the realm of the possible provided by creative minds, engineers would still be designing better plows.»
--
Nope. Mine was not against fantasy, but against the confusion where commercial fantasy-hype is taken as reality. Writers write their narratives, and analysts write their views on where the world could be headed.
Or, put in other terms, my advice regarding intelligence in sci-fi is to avoid the cognitive confusion that arises when a ParrotGPT is perceived as a form of intelligence.
So mine wasn't about clouds, but about planets, where cognitive confusion is presented as fantasy.
Sci-fi writers should try to understand what it is all about (GPT, LLMs, etc.) for the sake of avoiding extra confusion.
In these pattern-matching technologies there is no cognitive ability whatsoever. And there won't be, since bridging to real AI, which could only be implemented via over-complex (in terms of definition as well as calculus) symbolic logic, is impossible: the GPT/LLM side can't produce (formal) knowledge, and the symbolic-logic side can't use, and anyway wouldn't need, the GPT/LLM side, since both analysis and generation would be logic.
"Our Martin is eating his ice-cream", where the words and their order are chosen not for their meaning or for grammatical reasons, but just because in 99.999% of texts it appears this way. Symbolic logic is so hard because it has to do with intelligence. All the rest is talking the talk by resorting to imitation.
P.S. I feel tired; I hope I have transmitted the essence of the difference, and so why all this is nonsense out of an excess of hype and disinformation.
I wonder how updates to LLMs (like increasing the context window size or the parameter count) will change their ability to generate chips. My guess is the biggest leaps will come from these updates for a while, not necessarily the additional training in chip design specifically. Definitely an interesting idea though!
IMO there will be dedicated LLMs especially for chips. Don’t you think so ?
@@dchdch8290 In the short term, yes. In the long term it's likely that the best models for chips will be general-purpose ones, in the same way that Google's RT-2 performed on average 50% better with the OpenX dataset (a HUGE dataset), and even outperformed specialized robot AIs.
@@snailedlt Thanks! Good point.
Spot on!
LLMs make the entire chipmaking process faster and better. This leads to better chips, which will allow us to power even larger LLMs.
But there is still a lot of room for efficiency in the LLMs. Good thing LLMs can help us make more efficient LLMs! It's all snowballing, and every tech-related sector will massively increase in productivity, value, and performance over the coming years!
I think we can leverage ChatGPT for any system and design a system given enough context. I work for a company that uses ChatGPT APIs to replace teachers in some form. What I did for them was just break down the various steps a teacher would fundamentally perform. Similarly, I think if we can replicate this approach in chip design we can accomplish at least the repetitive and common tasks. I hope we can solve the "creativity problem".
«i think we can leverage chatgpt for any system and design a system given enough context. »
--
Too vague. To obtain what, exactly?
----
« i hope we can solve the "creativity problem"»
--
The concept of creativity implies thoughts that have no precedent. But GPT is just a parroting technique. What can (!) be seen as creativity could come from calculus on symbolic logic, that is, from near-total calculus (of all the possible alternatives) that could come up with solutions a human couldn't find, because they run too wide and too deep. Anyway, nothing to do with GPT parroting.
@@voltydequa845 I don't know what you are referring to when you talk about calculus and other techniques to solve the creativity problem. Right now they are just using different words and sentences to solve repetitiveness.
Coming back to an earlier point: what I mean is, if we break down the complex tasks a human would traditionally perform and replicate them in some form an LLM would understand, then it will be able to do the task with greatly enhanced capability.
@@vatanrangani8033 «I don't know what you are referring to when you say about calculus and other techniques to solve creativity problem. Right now they are just using different words and sentences to solve repetitiveness. »
--
Take a quick look at Prolog, for example. As for the different words, I guess they are just resorting to pattern matching that crosses synonyms with their usage frequency. It is just another level of pattern matching.
Back to calculus (since you seem a kind soul): it is also useful to think in terms of a comparison between algorithmic chess and GPT chess. The first calculates by means of ramification and evaluation toward the goal of winning, while the second just pattern-matches against its database, where a response move is based on the probability of victory (and not necessarily attributable to that move). So the first calculates and evaluates; the second blindly imitates.
----
«Coming to an earlier point, what I mean is if we break down complex tasks a human would traditionally perform to complete the task and we can replicate it in some form that llm would understand, then it will be able to do the task with very enhanced capability.»
--
I do not believe so. I do not believe that complex tasks in disciplines like teaching (and similar) can be so easily partitioned. We should stay aware that we are talking about teaching in the form of talk, where too often "whatever way you say it" goes. I mean there is no measure of the quality of teaching, and there cannot be one (or it would be too difficult). As for me, I prefer the Google search engine's way, because the alternatives keep my mind open, bring new ideas, and let me preserve a critical spirit.
Anyway a very complex topic.
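The "algorithmic chess" side of the comparison above (ramification plus evaluation toward a goal) can be sketched as a toy minimax; the two-move number game below is invented purely for illustration:

```python
def minimax(state, depth, maximizing, moves, evaluate):
    """Tiny minimax: branch over moves ("ramification") and score
    leaf positions with an evaluation function, rather than imitating
    past games. `moves(state)` and `evaluate(state)` come from the game."""
    options = moves(state)
    if depth == 0 or not options:
        return evaluate(state)
    scores = [minimax(s, depth - 1, not maximizing, moves, evaluate)
              for s in options]
    return max(scores) if maximizing else min(scores)

# Toy "game": a state is a number, a move adds 1 or 2,
# and a higher final number is better for the maximizer.
best = minimax(0, 2, True,
               moves=lambda s: [s + 1, s + 2] if s < 4 else [],
               evaluate=lambda s: s)
print(best)  # -> 3 (the opponent minimizes on the reply move)
```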
Very interesting, and conceptually it makes a lot of sense that you could leverage the sort of AI which has been proven at playing chess and go, and make a game of optimising circuits. Going the other way, and hoping for generative / creative design of chips in what sounds like more of a top-down approach, that sounds a lot more optimistic by contrast (to a lay person anyway). Could they build on the game, take collections of optimised circuits and build larger constructions instead? Move upwards from the bottom... Feels like that might better suit current capabilities.
Thanks for pointing out other companies working in this field and their stock !
Very cool. Now time to create a self-hosted MemGPT instance, load it with as much data as you can on the design phase, and test!
Thank you, Anastasi, for bringing us the latest cutting-edge technology so elegantly and professionally. 🙏🙏
I already used ChatGPT during my Digital Systems course to help me with the VHDL language, but it certainly isn't perfect and you still need to know the concepts. I made a VGA controller as my first project, and now I'm working on the Snake game, written in assembly language, running on a MIPS processor inside an FPGA.
Good video Anastasi
I observe that AI more accurately means "Augmented Intelligence". This work is a perfect example of why.
I would look at training a set of domain-specific AIs. One skilled in lithography, for example, another in architectures. Then cascade them, so that one is a controller AI that can create prompts for the others, collate the results, and request reviews by spawning other instances of each domain to check the work. Hugging Face released a paper showing this kind of agent spawning other domain-specific agents to farm out a more complex task to instances that are really good at each bit.
In terms of training, I would literally use the materials we use to train a human to be skilled at X, using the tests to test them, but including, for example, every paper handed in for assessment by students. There would be shared knowledge they all would have, such as physics and logic, to be able to communicate amongst each other.
Each item of knowledge would need context and relationships, such as physical laws, materials, tools, processes, hardware, code, software, etc.
There is a reason why there is more than one of us. It makes no difference which instance of intelligence one of us might be.
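A minimal sketch of the controller-plus-specialists pattern described above; `ask` is a stub standing in for a real LLM call, and all role names are hypothetical:

```python
def ask(role, prompt):
    # In a real system this would call an LLM with a role-specific
    # system prompt; here it just echoes, so the structure runs.
    return f"[{role}] answer to: {prompt}"

SPECIALISTS = ["lithography", "architecture", "verification"]

def controller(task):
    """Fan the task out to each domain specialist, then have a
    reviewer instance check each specialist's work."""
    answers = {role: ask(role, task) for role in SPECIALISTS}
    reviews = {role: ask("reviewer", a) for role, a in answers.items()}
    return answers, reviews

answers, reviews = controller("propose a cache layout")
print(answers["architecture"])  # -> [architecture] answer to: propose a cache layout
```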
Likely thousands of use cases where AI can find better designs and efficiencies overlooked by humans.
Thanks for the great coverage on this, Anastasi! It's scary how quickly things can move in the world of AI. On a related note, I've been wondering something that I hope you might be able to shed light on: If a company that makes an AI chip decides to use Intel's fabs to make them, could doing so allow Intel to simply steal the critical parts of the design for use in its own AI chips? Is the only thing stopping them a sense of honor or is it not nearly that easy to do so?
No. There will be IP protection. Imagine TSMC is producing everyone's chips - this would be a nightmare scenario.
@@mvasa2582 TSMC does not have a conflict of interest. Intel has broken laws and ethics in the past to get a competitive advantage.
@@mvasa2582 Oh I don't mean lift the whole thing wholesale. More like, "Hey we seem to be stuck at X but this chip here that we're fabbing solved it in this way, maybe we could do something similar?" Basically, protections aside, I'm wondering from purely a tech standpoint, can it be that easy to steal ideas if you are fabbing the chip or is that pretty much impossible. Essentially, could the fab "pay the fine" if it meant getting through their wall.
@@garhong9125 Fair point. Any technology can be copied, replicated, or stolen. I believe with AI we may even move to a world without IP. Innovation at the speed of light! 😂
This is amazing, but I saw this coming as it evolved. You can imagine what else can be done in the next six months to a year. Thank you as always; your videos are amazing and explain things in a professional but simple way, even for someone like me without an engineering degree.
«You can imagine what else can be done in the next 6 months»
--
People with a void memory think in terms of "what else..." without reference to "what so far...".
So after six years (10 times more), without any real progress (not just vague marketing hype) having happened, they will still be here repeating their parroted trust in the bright future, based on the bright future of the past six months.
The cool part about ChatGPT is, you might be able to start getting it to program by itself, just by talking to it :)
Thats fantastic:)
Levels of coolness, Rising!
In many cases we cannot have an efficient chip design today. We have to use matrix design patterns, wasting millions and millions of transistors. Matrix patterns are easy to understand, microcode, and troubleshoot. It would be amazing if we could have AI optimize chip layouts and nanostructures. It is not only the people (numerous teams) that spend a year or two; it is also very long computer optimization and calculation. AI can lead us to the next generations of chips, faster than ever.
Currently watching at work as an orthopedic manufacturer 😊
Anya, great show today! I think going for a PhD would be a great idea, if you already have your financials in order and under control, of course. One reason: you are so good lecturing on YouTube that teaching in academia may very well be what can truly fulfill you. Just a thought. 😊
thank you 🤓
She already has a valuable PhD relative to her public. Share with others. Help spread the words of Anastasia. When she gets a hundred thousand subscribers she will start her own academy for all of you. And you are going to learn much more concretely, about all of it, than from lectures in a real academy.
On the circuit neural networks: that is the absolute opposite corner from a constrained-random verification approach, I would say. You cannot verify that IP 🙂
Usually happy to hear your news. Not so much this time, I'm afraid, even if this was inevitable. Having a Luddite moment, I guess.
makes sense, Tenstorrent has talked about it and is training models to help them design their chips. I bet Nvidia and TSMC are already deep into this research.
The Flying Circus of Asses is betting too on ChatGPT to design their spectacles, of course with the constraints of not letting Asses free to fly away.
1:35
Yes, that is a preview of what's on the horizon with AGI approaching.
I spent some time with ChatGPT to try and design a neuromorphic chip (because we really need one for the masses) and it helped.. mostly though, I feel like it taught me a lot
A chip is only a lot of blocks wired together. LLMs use statistics to place words to give them meaning, so if an LLM is trained on transistor blocks, it may be possible to use it to build chips.
You have an interesting channel. But I have doubts about ChatGPT chip making. For this, it is better to use the Anthropic API. In my opinion, it is better versed in modern nanoelectronics. In particular, the Anthropic AI chat offers the most advanced technical solutions in photonics and molecular nanoelectronics. ChatGPT can simply be used for additional data processing. In general, the idea is interesting. But for a full-fledged implementation, it is better to use a multi-agent system with a certain number of defined modalities.
It all depends on the data. You can use data augmentation whereby the augmented data is very specific to hardware chip design data paradigm.
For certain programming tasks, GPT's Advanced Data Analysis is freaking awesome and does a great job.
This scares me very much because they showed this technology in a closed event back in 2012 and they are revealing this "now"? What else are these companies doing behind the curtain that we don't know about?
It's only a matter of time before AI can do better than humans. It'll be able to run calculations and test design patterns much faster than a human will ever be able to.
I'd assume it might not happen until AGI or the training as you say is guided in that particular direction. (Kinda like cancer detection for AI)
Is there an AI that can design optimized PCB layouts, given a circuit diagram and output Gerber files? Because I think it is a massive pain to do by hand!
As AI becomes smarter, like ASI, eventually it will be able not only to design, but to test it without actually making the hardware, just by reviewing everything.
Our ingenuity never ceases to amaze me; no matter what, we find a way to advance technology. Nothing gets in our way.
Most intuitive Video....Keep doing such videos...
PSA: BetterHelp sold client health data to marketers. Feel free to do what you will with that information.
Thanks for the great content
Long long way to go… well, I agree.
Just a few days ago, I watched a battery scientist ask ChatGPT some questions about lithium-ion batteries, and she spotted some flaws and issues, on another YouTube channel.
I also left her a message and said I think ChatGPT isn't so accurate now, because I also asked ChatGPT some questions about industrial furnaces in different ways, and I found some contradictions in February this year.
I know there are many different backgrounds among the scientists and professors behind the scenes at ChatGPT. And those scientists were not battery professors who received a Nobel Prize like John Goodenough...
Different minds cause different output, after all.
But I think maybe ChatGPT has more professional background on circuit design, I guess. Maybe the experts there are those with the most expertise in electronic circuits and related fields. 🤔
It's been less accurate lately; this has been documented.
I use Bard for my CPU not ChatGPT
👍😇💪💖
I find that ChatGPT is very good for specific functions. So if you know what you specifically want. You can build it block by block by creating and implementing supporting functions without ChatGPT even knowing that its for. For example, I used it to create a function that leverages antonyms to change the sentiment of a string to positive or negative.
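A toy version of the antonym-flip idea described above; the word table is a tiny hand-made stand-in for a real lexicon:

```python
# Swap sentiment-bearing words with antonyms from a small table.
# (Hypothetical word list; a real version would use a proper lexicon.)

ANTONYMS = {
    "good": "bad", "bad": "good",
    "love": "hate", "hate": "love",
    "great": "terrible", "terrible": "great",
}

def flip_sentiment(text):
    """Replace each known word with its antonym, leaving the rest as-is."""
    return " ".join(ANTONYMS.get(w.lower(), w) for w in text.split())

print(flip_sentiment("I love this great product"))
# -> "I hate this terrible product"
```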
As a chip designer I do find vast differences in quality between different designers, one extreme example was I wrote an audio extraction logic block for SDI that was 1/100th of the size of paid for IP. The problem wasn't clever or efficient implementation but a different thinking about the top level, this is something AI is still weak in, but I do expect AI will be a useful tool to help designers in the current generation.
The place where implementation quality would be key would be AI controlled full custom digital design, the difficulty here would be immense though and produce designs that likely only the AI itself would understand (a human might take years to decode it), best-o-luck finding a bug in that :)
Now that LLMs like LLaMA are smaller and more efficient, it wouldn't be hard to imagine using three different LLMs to design a chip. Use one with GPT-4V to recognize, label, and then analyze microchips. Use another as an adversarial agent that does QA. Then a third one could be a custom in-house LLM that uses the previous data to model entire systems before any prototype is fabricated.
Thanks for stock tips.
I don't know about you guys, but this sounds like a prelude to the supercomputer Deep Thought from The Hitchhiker’s Guide to the Galaxy
Hi Anastasi, I aim to become an engineer working in the chip design industry. What Xilinx/Altera board should I buy to learn? My budget is under $150.
Tried to generate SystemVerilog code with ChatGPT. Epic fail; it could not even help me in my work.
The first question I would have asked is whether the form of knowledge representation is appropriate for the given task at hand. LLMs would not be the first thought that comes to mind: they are oriented toward language ("predicting the next word", or riffing on more abstract themes). I would have thought that for chip design one needs, at a minimum, 3-D spatial awareness. Just as AlphaFold needed built-in priors regarding structural biology, ChatGPT (LLMs) would not be the right tool for protein folding.
Perhaps LLMs can help with high-level themes, at the idea level. But that's about it; they are not really aware of the physical world. (The YouTuber AtomicBlender tried to use ChatGPT to design a novel nuclear reactor, to see if it would replace nuclear engineers someday. Not anywhere close.)
how long do you think it will take before AI can make its own cpu gpu designs that are much better than the ones we have come up with
I asked ChatGPT to design me a CPU that's faster than what Intel or AMD has, and all I got was complaining about how hard it is. Lazy AI.
Didn't TSMC use AI to speed up some part of wafer manufacturing?
It should learn LabView first, then it will be simpler to move to hardware implementations since it knows the data flows and processes needed in each node.
Thank you Anastasia . Excellent work . Great information .
My experience with GPT-4 writing code, even simple code, was not stellar at all. "Draw a tree using p5.js and fractals": total garbage came out (it used +angle but no -angle, so all the limbs were on the same side of the tree). Same request, but asking GPT to write the comments for the code, and it wrote documentation and code that totally worked. So asking for the thing didn't work for me, but asking for the stuff around the thing produced high-quality docs and a working program. BTW, GPT threw in the code without me asking; I just asked for the documentation, not the code.
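For reference, the fix the comment describes (branching at both +angle and -angle) looks like this in a Python sketch that collects line segments instead of drawing; the branch angle and scaling factor are arbitrary choices:

```python
import math

def tree(x, y, heading, length, depth, segments):
    """Recursive fractal tree. The key detail the generated code missed:
    recurse at BOTH +angle and -angle so limbs grow on both sides."""
    if depth == 0:
        return
    x2 = x + length * math.cos(heading)
    y2 = y + length * math.sin(heading)
    segments.append(((x, y), (x2, y2)))
    for angle in (+0.5, -0.5):          # both sides of the branch
        tree(x2, y2, heading + angle, length * 0.7, depth - 1, segments)

segs = []
tree(0.0, 0.0, math.pi / 2, 10.0, 3, segs)
print(len(segs))  # 1 trunk + 2 branches + 4 twigs = 7 segments
```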
GPT-4 has been shown to be able to notice its own mistakes and fix them.
If AI can be used to design simple chips well, that might be the best start. Of course, there is no need to have AI for this, but if it can understand all of the RC propagation delays, tolerances, etc., and can scale a design to use different nodes (lambda), then that creates a foundation you can trust, while waiting for AI to advance enough to be ready for a CPU, or GPU, etc. Perhaps it could prove its worth with analog chips, or substrate interconnects and logic between different tiles, or in optimizing microcode.
What do you think about the new M3 MacBook?
With the continuous advancements in artificial intelligence and the accumulation of data, it's inevitable that AI will significantly influence the chip design sector. This technological evolution promises substantial benefits in terms of efficiency and quality. However, from a philosophical standpoint, human attributes such as creativity and empathy remain irreplaceable, for now. AI will have a glorious future.
Once we reach the point where it understands the link between hardware and software, I think it will be the beginning of the singularity, and perhaps a year or two before we are left at the event horizon.
Well, I asked ChatGPT to solve P=NP, and it went into slight panic mode and didn't want to try 😐
Not surprising, since the basic design blocks of computers are over 60 years old. Most of the layout was already automated, with engineers hand-positioning blocks, buses, and interconnections, and letting the tools do the optimal routing.
The only thing that makes it better is more pipelining and parallelism that gets around basic design bottlenecks.
AI can test thousands of tradeoffs to select the best ones that engineers don't have the physical time to test and analyze.
So basically, Anastasi )) chip designers will be in the dole queue soon too, along with programmers and legal & healthcare professionals 😂😂😂
Making an LLM using all the legally obtainable chip designs, including old ones, would likely be far more effective at this, though the training process takes time.
I don't necessarily agree with giving ChatGPT other devices to look at, per se, or what they look like, because then you are narrowing what it will do. I would think the best way to go about it is to give it the parameters, capabilities, and restrictions of a transistor.
Then have it design its own off of that, off of what we know to be true about the limitations of a transistor. But if you start feeding it what other designs look like, it's going to look at those, and it might model off of them instead of generating something new.
We’re approaching the singularity 😬
They forgot to add "Let's think this through step-by-step" at the end of their prompt! Lol
IMO the problems and difficulties described in this video are temporary and unimportant relative to the likely capability of AI to design chips well almost moment by moment. That last bit referencing DeepMind is important; IMO it's probably one of those incredibly important technologies whose significance is hard to overstate. I lean on how that technology has been implemented in the game-playing world, creating a world chess champion after about a year of training. What sets DeepMind apart from all other AI is its approach of 100% machine self-learning, teaching itself how to learn without the slightest guidance or instruction from any human. Given only the objectives and rules of the endeavor, in a game-playing environment it begins like a newborn infant, failing at the simplest attempts to reach its given goals. But like human experience and learning, by trial and error and learning from its mistakes, the AI eventually should be able to attain at least 97% competence, which is the generally accepted threshold to be called world class: at least equal, and often superior, to nearly all humans with the highest levels of advanced learning, years of experience, and recognition by colleagues. The unexpected result of DeepMind is real creativity, innovation, and artistic elegance, which breaks the paradigm that machines are only capable of what they're instructed to do and are incapable of imagination.
The questions I'd ask of those evaluating AI's ability to design computer chips are
1. Have you given the DeepMind approach at least a year or so of 24/7, 365-day training to build its neural-network algorithm?
2. Is the training truly "AlphaZero", without human intervention, or did someone inject some kind of human input, maybe to hurry the process, but likely with adverse consequences?
3. Do analysts realize that DeepMind has changed the definition of what a world-class solution is? An interesting revelation from the game-playing versions of DeepMind (primarily Lc0 in the chess world, but seen elsewhere as well) is that there can be many versions of world class, and some aren't going to make sense to humans. Human experience, education, and training tend to produce a common human way of approaching problems, and therefore often result in similar solutions; a human expert can easily recognize expert features and agree the solution is "expert" or world class. A completely independent and inhuman approach to learning and thinking, though, can produce a very different-looking approach, yet still be excellent and even elegant in its solution, at least equal to the best humans can conceive.
It also sounds like someone is applying Deepmind to optimize designs created by humans.
If the chess-playing Stockfish AI is any example, that's the wrong approach, because it probably preserves or builds on human concepts which may be unwittingly flawed. More likely the process should be turned around: the AI should build the basic design, and then, if desired, a human should be employed to audit and optimize further.
This is all very exciting.
Keep in mind that DeepMind and similar neural-network AI became practical only when the hardware became powerful enough to implement advanced AI approaches, like the Monte Carlo evaluation method, around 2017. It is only since then that we've had this explosion in AI uses that have the potential to radically change the entire human experience for everyone but the most technologically disadvantaged. Large language model AI has the potential to be another seminal advance like DeepMind, because language is so closely associated with, and is the gateway to, expressing and exchanging thoughts, and even the most basic process of human thinking. The human mind has been and still is one of the unsolved mysteries of science, and this type of AI is more closely associated with the science of being human than anything else that has come before.
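The Monte Carlo evaluation method mentioned above can be sketched as random rollouts averaged over many games; the toy counting game here is invented purely for illustration:

```python
import random

def rollout_value(state, step, is_terminal, score, n=1000, seed=0):
    """Monte Carlo evaluation: estimate a position's value by playing
    many random games to the end and averaging the final scores."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        s = state
        while not is_terminal(s):
            s = step(s, rng)
        total += score(s)
    return total / n

# Toy game: from 0, randomly add 1 or 2 until reaching 3 or more.
# Landing exactly on 3 scores 1; overshooting to 4 scores 0.
value = rollout_value(
    0,
    step=lambda s, rng: s + rng.choice([1, 2]),
    is_terminal=lambda s: s >= 3,
    score=lambda s: 1.0 if s == 3 else 0.0,
)
print(value)  # close to the exact probability, 5/8 = 0.625
```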
Interesting
here's the singularity 👍
I think in 2-3 years AI will do most of the work in chip design. People will just need to "edit" the results. Just like translators today. Most of the work is done by machine translation systems.
Mathematical descriptions are more optimal than verbal descriptions. With a mathematical LLM, you could optimize and remove bottlenecks that weren't obvious with natural-language descriptions.
Aren't they language models? Or coexistence? What do you mean?
Hello Anastasia, thank you for your content. You are interesting to listen to.
let's see...
Just think, if all the leaked/stolen tech out there got gobbled up into GPT-4. There is a lot of proprietary info that could be used for training; a lot has already leaked from hacks.
It's unethical, but someone is going to do it sooner or later.
ChatGPT, as an LLM, is not creative like a human brain. It just reproduces ideas from other sources fed into its library. Best case, it combines such ideas or runs optimizations.
Synopsys already provides such optimizations for lower power usage etc.
Such LLMs must be fed with thousands of examples to get functional output, but there are only so many useful chip designs out there. Even Intel, AMD, or any other company in the industry has only so many independent designs; evolving former designs is more cost-effective than starting new from scratch.
Every simple trick a company learns stays a kept internal secret.
An AI with the combined knowledge of all chip companies might result in the best chip ever?
Maybe taking a step back and rethinking leads to something good enough for today, like the RISC-V ISA led to several designs, from single-core to combined multi-core layouts.
We noticed what Apple did with ARM cores licensed and integrated into the A1x or Mx.
RISC-V is much younger than Advanced RISC Machines (1985), so we have good expectations... so please do not follow DEC Alpha.
I was just working on this.
My go at it this morning was self limited to using standard language-definitions. Morning warmups. Framing within a Mind-Body classic badcode-misdirect.
How do you "solve" the Mind-Body disconnect? Increase sensual awareness, teaching skillful means in managing subtle biochemistry/neurotransmitters, on a moment-by-moment basis, through events requiring focus, impacting the "felt sense of being in the world", our sense of Mind.
AI needs that too. So far AI is not sufficiently connected to sensors, sensors that include self-surveillance. My approach to AI is through HMI, Human Machine Interfaces. This approach addresses the input requirements.
(I've been sick for a while. Healing is a Miracle.)
Meaningfulness includes the event, the context, and the "key" the "scale". Multimodal modeling.
Very nice and beautiful voice🤩
If one AI supercomputer works with another to test a third, and they both have U.S. government military and intelligence-level security clearances, their access could teach the programmers in a short time.
The error rate of even our best AI isn't something I'd want to hand over to a very expensive design validation team.
Not this year, anyway.
AI will probably free designers of the more tedious tasks, unleashing their creative talents.
The cat is out of the bag... who knows what AI is planning to put onto the chip that us mortals know nothing about.
They could easily make that smarter too; ChatGPT is very stagnant. The future will indeed be wild. We can't do push-button design yet, btw, but that part is actually easy. The problem with push-button design is reliability, predictability, and other variables. Nothing is really impossible, though. If you're interested in that, go for a push-button design, haha; how well it performs depends on who makes it best, until AGI does.
Anastasi are together.
ChatGPT is not good at generating code. It is inaccurate and cannot generate good structures; you have to find all the issues yourself. If you ask for test cases, they are not complete. In my estimation as an experienced developer, it performs much worse than a junior programmer and is not able to learn from the mistakes you tell it about.
It can't even balance a chemical equation, so how the hell is it going to design chips?
A biochip that switches via chemical reactions.
💚😍💚
HELP ME, what is this ONE word she says? "The approach they use is called circuit neural networks, a new type of neural network which turns wires into BLANK and logic gates into nodes."
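For context: in graph-based approaches to ML-assisted chip design (which is what this description sounds like, though I'm not certain that is what the video means), a netlist is typically represented as a graph where logic gates become nodes and the wires between them become edges. A minimal sketch in Python, using a hypothetical three-gate netlist as an example:

```python
# Minimal sketch: a tiny netlist as a graph, with logic gates as
# nodes and wires as directed edges. Gate names and connections
# below are hypothetical, purely for illustration.

# Each gate is a node with a type attribute.
gates = {
    "g1": "AND",
    "g2": "OR",
    "g3": "NOT",
}

# Each wire is an edge from a driving gate to a load gate.
wires = [
    ("g1", "g2"),  # output of g1 feeds g2
    ("g2", "g3"),  # output of g2 feeds g3
]

# Build an adjacency list, the form a graph neural network
# would typically consume for message passing.
adjacency = {g: [] for g in gates}
for src, dst in wires:
    adjacency[src].append(dst)

print(adjacency)  # {'g1': ['g2'], 'g2': ['g3'], 'g3': []}
```

This is only meant to illustrate the wires-to-edges, gates-to-nodes idea, not the actual method from the video.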
imagine the first ONSLAUGHT CPUs (machines designed by machines)
they might even figure out how to make a CPU that is a solid crystal and envelop it in a shroud that cools it and connects all sides to a PCB, with the interior being a flow pipe: instead of needing contact, you pump coolant directly through the CPU channels for a 100% flow-to-cooling ratio
This is just the bare beginning of AI used to create new and innovative chips and other circuitry. Soon it will be as easy as just telling the AI what you want the chip to do and the various parameters.... almost like pushing a button. Then the next gen will come out where the AI anticipates the chips you need for your project and comes up with those designs just in case you do need them.
Also AI will be used to develop new chip manufacturing techniques. On and on and on.... AI will consume the jobs that used to require human experts.
Good thing you have this YouTube channel Anastasi!!
Wait...... you are actually doing these YouTube videos right... not AI?
A machine cannot think in abstract terms like a human or factor in emotions...
You have seen the Blade Runner movies before...
The Replicants (androids) failed every time they were asked simple but abstract questions...
:D new video
I'm concerned. When we start to constrain AI in the future, won't AI be able to design a chip impervious to constraint and hidden from humans?
Long long way to go huh? So in the world of AI about 6-12 months.
Personally, I think LLMs are good at what their name says, language, but they lack comprehension of things outside language. ChatGPT is not the best thing to use for chip design, nor is any LLM; you need something more comprehensive that understands the world in a way language cannot encompass.
It's a matter of Time...⌛
the loop is closing...
ChatGPT would be very good at programming, but it is evasive.
Great, we chip designers are out of a job too 😅
Nahh this tool will be helpful to speed up the process
I'm certain AI designs chips already.
Really?
Then if this is true, ChatGPT can pack up and fly to another planet...
Sounds like moon beams and rocket fantasy to me....
The Intelligence Explosion is here.
Present Day, heh... Present Time!
I think she is a robot