The brain might only draw low watts most of the time, but it's been on, running consistently, for our entire lives. I think that's where consciousness emerges, and the first AGI will probably be lost given the speed we force it to learn at while building the giant LLM matrices.
The brain isn't an analog device. It's a biological machine, something we have never even tried to build before. Describing it as analog or digital is fundamentally wrong. I don't know of a single machine that works on chemical signals, is based completely upon water, is self-replicating and self-repairing, and has a hard age limit coded into it. Oddly enough, we went more and more electricity-based with our technological innovation, while biological evolution did the complete opposite.
🎯 Key Takeaways for quick navigation:
00:03 💡 *Computers with Von Neumann architecture face limitations due to the Von Neumann bottleneck, restricting performance based on data and program memory access.*
01:01 ⚡ *The brain's efficiency, operating at low power (12-20 watts) with 86 billion neurons, inspires neuromorphic hardware development, aiming to replicate its capabilities.*
02:29 🔄 *The brain achieves exaflop-level computation through parallelism, unlike digital circuits. Lack of synchronization allows flexibility and resilience, using chaos as a feature for efficient learning.*
04:44 🧠 *Neurons, the brain's fundamental units, exhibit diversity in size, speed, and stimuli preferences. They form complex connections through synapses, contributing to computation and memory.*
09:11 🌐 *Silicon neurons, like IBM's TrueNorth and others, emulate neural behavior using traditional CMOS transistors. While scalable and low-power, they face challenges in energy efficiency, scalability, and replicating analog brain behavior.*
Made with HARPA AI
This is insanely interesting. Knowing that this much was tried by big names in the hardware industry makes you wonder what might be going on in their R&D departments nowadays.
Amazing! I am a TMS patient - you should look into this technology. It's an MRI-strength magnet that goes on the right side of my head, and it has honestly saved my life. I want to work on a book about my experience.
I remember reading about memristors in an IEEE magazine back around 2010, when I was a freshman taking intro EE classes. Even back then neural nets got a lot of attention, and the article talked about how memristors could be used with neural nets. I remember thinking this was going to be huge, and then nothing for a decade. Cool to see it mentioned now in 2024. The future can come fast or it can come slow, but there's no stopping its arrival. What a time to be alive.
This reminds me of Pluto, where one of the roboticists says that to create the "perfect" robot (the most like a human) you need to introduce chaos to its mind by flooding it with countless different personalities and strong emotions and letting it bring order to the chaos. This may or may not end up creating a robot that is a monstrous psychopath, but it will result in the robot most like a human, for good or bad.
A company is using 100 lava lamps to introduce chaos for use in cryptography. Maybe we need to create a tiny chip with 100 micro lava lamps to create a particular personality, making it impossible to predict its actions. How to generate random numbers from nothing has always been a problem in computer science.
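For the curious: deterministic hardware can't conjure randomness by itself, which is why systems harvest physical noise - the lava-lamp wall is one famous source, and OS kernels mix interrupt and device timing jitter into an entropy pool for the same reason. A minimal sketch of tapping that pool, assuming a Linux-like system where /dev/urandom exists:

```c
/* Minimal sketch: seed a program from the kernel's entropy pool.
 * Assumes a Linux-like OS exposing /dev/urandom; the pool is fed by
 * physical noise (interrupt timing, device jitter), the same idea as
 * pointing a camera at a wall of lava lamps. */
#include <stdio.h>
#include <stdint.h>

int main(void) {
    FILE *f = fopen("/dev/urandom", "rb");
    if (!f) { perror("fopen"); return 1; }
    uint32_t seed;
    if (fread(&seed, sizeof seed, 1, f) != 1) { fclose(f); return 1; }
    fclose(f);
    printf("seed from physical entropy: %u\n", seed);
    return 0;
}
```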
Your description of how the brain processes data reminds me of OpenCog's AtomSpace. That uses a mixture of different structures joined together in various ways to both represent knowledge and process data. It is a metagraph-based system, whereas the brain (don't quote me on this) seems like a graph-based system. It's weird to see a line of similarity between those two things.
That digital neuron could be a bit better by using DRAM instead of SRAM. Speed is obviously not needed, and DRAM uses about 8 times less space on the silicon than SRAM (a 1T1C DRAM cell versus a 6T SRAM cell). The problem is that you need to keep refreshing it, as it's essentially a tiny capacitor.
I wonder, as neuromorphic computing develops, what software would be like. If memory and computation can be combined, could the line between software and hardware also be blurred? In the brain, is there a difference between the substrate and its program?
Take a look at Susan McKinstry's work on moving away from von Neumann architecture; it's pretty cool and is another road for this type of work. She's a professor at Penn State and has videos online of her research.
Good explanation of the differences. Using a digital approach we can only simulate a finite number of neurons - hardly comparable to a brain. Perhaps the analog nature of quantum computers might get closer, but they will be limited to an even smaller number of neurons. I think the current limit is fewer than 200 qubits.
We do not know how memories are stored or how the brain computes. We know synapses have something to do with them, but we can destroy the synapses and the memory trace remains (Tomas Ryan's work, also mentioned by Randy Gallistel).
Oh, that's a neat extrapolation of sci-fi at the end. The mullet joke? No. That was bad. I got a good flash of a cool cybernetic sci-fi setting where, instead of humans with cybernetic implants, it's robots with memristor or IBM TrueNorth type stuff to make them more similar to biological life.
1:30 You could say the brain is the one example of a quantum computer in nature. A parallel processor also doesn't need a fast clock speed if it can access all of memory while visual, auditory, and tactile feedback are processed simultaneously - all while you maintain a full conversation... Everything the brain does is technically relativistic.
What's the clock speed of my brain while on LSD and DMT? It can probably hit more exaflops too... Ever see what brain activity looks like on MRI or brainwave scans? Definitely "100%" brain activity - so much activity that parts of your brain that don't normally communicate start communicating, like conscious and unconscious parts of your brain matter. Like FPGA-style reprogramming of the brain? These drugs do cause nerve cell growth, but much more controlled than stimulant drugs like amphetamine or cocaine (which lead to issues...), and can reconfigure the internal "data bus" of your brain structure. Don't forget the spinal cord is also part of the brain and is part of the unconscious reflex system.
You answered the question in the video. The brain is only capable of achieving that low a power consumption because it makes mistakes. Computers take very high amounts of power, but they are deterministic.
The memory bottleneck is a more serious problem than Moore's law. A senior dev on my team once wrote a small C program to show how memory-bandwidth-limited modern CPUs are: the code would just read serially from an array of 1 billion 4-byte integers and time it, reporting elements parsed per ms. In testing, even a single Zen 2 core would saturate 40 GB/s dual-channel 3200 DDR4, let alone 8 cores. It's almost magic how CPUs and GPUs can still work so fast while being so bandwidth-starved, especially on servers.
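The original code wasn't shared, so this is only a sketch of what such a microbenchmark might look like - the array size, timing method, and output format here are assumptions:

```c
/* Sketch of a serial-read memory bandwidth test: stream through
 * 1 billion 4-byte ints and report elements/ms. Needs ~4 GB of RAM;
 * compile with optimizations (e.g. -O2) for a meaningful number. */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <time.h>

#define N 1000000000ULL

int main(void) {
    int32_t *a = malloc(N * sizeof *a);
    if (!a) { perror("malloc"); return 1; }
    for (uint64_t i = 0; i < N; i++) a[i] = (int32_t)i; /* touch every page */

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    int64_t sum = 0;
    for (uint64_t i = 0; i < N; i++) sum += a[i];       /* serial read */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ms = (t1.tv_sec - t0.tv_sec) * 1e3 + (t1.tv_nsec - t0.tv_nsec) / 1e6;
    /* print sum so the compiler can't delete the loop */
    printf("sum=%lld  %.0f elements/ms  (~%.1f GB/s)\n",
           (long long)sum, N / ms, N * 4.0 / ms / 1e6);
    free(a);
    return 0;
}
```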
nice video. Repeats 2x @13:55: "and lastly the brain is an analog device, digital devices are an ill fit for replicating their behavior so we need to incorporate the analog element, which limits the systems flexibility"
As a software developer, something that always seemed weird about robots and AI in science fiction was how many were described as having electronics dedicated to various functions, like an "emotion chip" or "justice circuit." I thought it would make much more sense for them to use general-purpose computers where all of these features were programmed in software. Having "algorithms" or "subroutines" implementing these human qualities would sound more realistic. The terms used to describe the actual design were usually just arbitrarily picked by the author and not relevant to the story, except maybe as a plot device where individual hardware failures or lack of parts availability could render a subset of a robot's "personality" nonfunctional without affecting the rest. Now that AI is being developed and commercialized more, it turns out some of these functions do in fact make more sense to implement with specialized hardware.
It's an easy and safe plot device because it's modular, plug and play. If the "improvement" turns out to be shit in the ratings, the cast can easily take it out without rewiring everything inside the character that is a robot/AI.
Emotions in the brain are mediated by neurochemicals released in response to conceptual interpretation of stimuli. A typical person will recognize a facial and environmental stimulus, like a family member, and their brain will trigger an oxytocin response from interacting with them, which will strengthen that neural pathway with the internal conceptualizations of comfort and safety, if the actions and the interpretation of the external person's actions reinforce it.
If neuromorphic hardware were to be built in solid-state chips, could it mimic neurons' ability to grow, die, multiply, and morph their shape? How important are these aspects for the brain's ability to compute?
This really helps explain Intel's pivot towards NPUs for me - thank you for making the vid. Of course AI is the current trend, but this really helps show how it can be more than just a trend.
The truth is that researchers have not managed to implement quantum pathways in hardware. Random chaos? Check out the recent discovery that quantum mechanics is found to be key in photosynthesis. This area of research into making brain-like circuits requires quantum biology.
Thanks for the great view on the subject, as always. In electronics academia we talk only about this, and to be fair I'm kinda tired of it, but your opinions are very refreshing.
Making brain computers is quite easy. It's called having a child - a new brain cooperation partner. Costs extremely little too. So little that it was possible even tens of thousands of years ago.
@@rizizum Costs almost nothing to train. Mimics by itself. In fact the most successful schooling systems are mainly powered by children interacting with each other, like the ACT system. Very automated and very scalable when done properly.
I gave a lecture in 1999 at Xiamen Da Xue on neural networks to grad students and some faculty. I was not yet an academic but was teaching computer science. The lecture bombed. And I was told by a kind, open-minded discrete mathematician on the faculty that my lecture would be better received in the medical school. No one was interested.
It's sort of ironic that this lovely peep into brain analogs has its own Matrix moment - the cat walks past the door, twice. Look for it around 14:21. Did you do that on purpose? Always fantastic quality!!
There is a 1988 publication in Biological Cybernetics that mimics a neuron with an AND gate, so could a digital circuit fit after all? How could people have missed it for more than 35 years?
Good video. I thought the references you made were a smart inclusion - I figured for sure they'd be great engagement bait, but I don't see any comments about them. I suppose I'm the only idiot here, so I'll say it: man, I love It's Always Sunny. Also quite funny that you used the Avengers meme quote, quite randomly.
How is the RNG done for IBM TrueNorth? How many RNG modules affect how many and which components? It seems like it would be one of the better candidates for a sentient AI, yet it looks like nobody has tried maybe eking out some sentience that may nudge the RNG to exhibit some agency one way or another. An Aaronson Oracle can predict whether a human or code is trying to win at rock paper scissors; perhaps this test could show if TrueNorth is behaving more like a human when programmed to play optimally. If it drifts in a chaotic way like humans do, this could be a test for sentience or agency.
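For anyone curious, here is a hedged sketch of the Aaronson-oracle idea - predict a player's next rock-paper-scissors move from the frequencies following their last two moves. The window of two moves and the 0/1/2 encoding are illustrative assumptions, not a published spec:

```c
/* Sketch of an Aaronson-oracle-style predictor for rock-paper-scissors:
 * count how often each move followed each pair of previous moves and
 * guess the most frequent successor. Encoding: 0=rock 1=paper 2=scissors. */
#include <stdio.h>

int main(void) {
    int counts[3][3][3] = {0};  /* counts[prev2][prev1][next] */
    int prev2 = 0, prev1 = 0, move, rounds = 0, hits = 0;
    printf("enter moves (0=rock 1=paper 2=scissors), EOF to stop\n");
    while (scanf("%d", &move) == 1 && move >= 0 && move <= 2) {
        /* guess the successor seen most often after (prev2, prev1) */
        int guess = 0;
        for (int m = 1; m < 3; m++)
            if (counts[prev2][prev1][m] > counts[prev2][prev1][guess])
                guess = m;
        if (rounds >= 2) {          /* need some history before scoring */
            if (guess == move) hits++;
            printf("predicted %d, got %d  (%d/%d correct)\n",
                   guess, move, hits, rounds - 1);
        }
        counts[prev2][prev1][move]++;
        prev2 = prev1;
        prev1 = move;
        rounds++;
    }
    return 0;
}
```

A predictable player's hit rate climbs well above the 1/3 chance level; a good hardware RNG stays pinned at it.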
As yet, there does not seem to be much evidence for the brain using quantum mechanical effects in any computation. So, if you want the hardware to resemble the mechanics of the brain, quantum computers don’t really seem like the thing to do (given current understanding). (Also, the quantum error correcting codes developed so far aren’t good enough given current hardware to get error correction to break even, I think.)
@@drdca8263 There's also not much evidence that the brain uses "computation" in the classical sense at all; it is just a utility word we use to represent the combined goal-directed apparent activity of neurons. But we do know that our senses have something to do with quantum mechanics in a sense, and we do have (limited and specific) evidence of some (possible) quantum interactions in the brain, such as the quantum smell experiments. And we have some more limited evidence that quantum mechanics and indeterminism appear to have something to do even with unicellular organisms (like Martin Heisenberg's work on the topic). So while this is mostly theoretical and has limited evidence (as does any attempt to explain the brain and consciousness in general), it is far from being something absurd or not grounded in anything at all.
@@diadetediotedio6918 “but we do know that the senses have something to do with quantum mechanics in a sense”? Huh? And yes, I did waffle on whether to say “computation” or “multi-cell correlation of behaviors” or whatever. Still, the point remains: if we want to do computation on devices that work physically in brain-inspired ways, short of “eventually maybe we could simulate the cells at a molecular level”, we don’t have anything suggesting any *particular* way that a quantum computer would be well-suited for that.
The Von Neumann bottleneck is not the reason why neural computing is popular. It is pretty easy to make hardware not based on that model. It is because it takes several deep learning layers to simulate just what we know about a neuron's ability to communicate, and we still know very little.
Not all white papers are legit… I looked at the 1971 memristor paper, and it seems like it's reaching for greatness - like it wants to be first to coin the term in case someone else comes up with a working system. At the time, integrated circuits and microprocessors were brand-new, cutting-edge stuff. We had the naive opinion that the human brain could be replicated as AI (HAL) with these then-new technologies. The paper, though historically interesting, seems to be more about publishing as a requirement to maintain academic status than about pushing the state of the art forward. I could be horribly wrong, but after 52 years nothing came of it. Not even by the Soviets or China. Yeah, I was around then…🤣
And lastly, the brain is an analog device.
And lastly, the brain is an analog device.
At that point I thought someone had failed a Turing Test.
When your professor is checking to see if you're paying attention.
Attention paid.
i thought i was having a brain aneurysm
I checked twice to see if I missed a point, or if Jon missed a point! 😃
Of course, the human brain takes 9 months to construct, around 16 years to reach full functionality, and requires about 33% downtime to function properly.
And it can be completely destroyed by a few minutes of complete inactivity… such as when you stop breathing or have a heart attack…
Rebuilding that same brain, after it experienced the unscheduled downtime, can take another decade or so…
It would be nice to have a method, or understanding, of rebuilding all of those lost connections in the brain….
I’m open to ideas…
😃
The downtime is on the user end, the brain is still working though
@@issholland wrong. The body never has downtime, it just has concurrent repair time
@@AC-jk8wq psychedelics
There is no downtime, it's constantly at work.
Brain-like circuits sound like a great idea, and many people have long found them appealing. Once upon a time there were two legendary chip designers -- Federico Faggin (the father of the Intel 4004, 8080 and then the Z80) and Carver Mead (famous for the design methodology in the "Introduction to VLSI" course, which inspired the fabless semiconductor industry).
Back in 1986 the two founded Synaptics -- precisely to develop brain-like chips. The name was an allusion to the synapses of biological neurons. They really tried, but could not make it work. However, the know-how they developed allowed them to successfully pivot towards capacitive touchpads. The company became quite famous, and their touchpads were found in many laptop computers. Few people remember now that touchpads were not the plan when Synaptics first started.
Faggin lolol
Oooh so that's where the name comes from. Always wondered that.
they still do
Is it me, or did lots of interesting stuff happen in 1986 🙃
That's an unfortunate last name
They're hard? I thought they'd be soft, maybe a bit squish.
Hard, but surprisingly elastic! The average human brain makes an excellent emergency trampoline if pancake smooshed (sorry for the technical language)
It depends on preparation id imagine
Exactly what I was gonna say lol
I think you’re being difficult 😉
It's not that difficult anyway, my mother made one like 28 years ago
This was incredibly interesting, thank you! Neuromorphic engineering is something that isn’t much talked about - or maybe I am just not well informed.
I also like that your voice is very calm and soft in this video; perhaps this topic was more enjoyable to process than the fab war episodes.
*NorthPole had insane performance charts at the keynote, not sure why it isn't big news.
What's wild is the neurochemical system in the brain, and on top of that the several operating frequencies for the various brain waves. The brain is really space efficient and uses a lot of layered systems to synthesize all its stimuli, internal state, decision making, and responses. Its plasticity, neural weighting, and glitches are fascinating too, and all of this runs on protein- and DNA-based hardware, which limits the thermal and frequency response. If we could get better 3D-stacked and capillary-cooled systems, we might be able to get close.
To call the brain space efficient is like calling the ocean big. It's efficient to an extent that's not replicable by modern technology. The packaging of biomolecules and living tissues is, and will be for the foreseeable future, a wonder of physics that everyone can only look up to.
I thought Huang's law meant you need to pay 2X for your GPU every 4 years 🤔
I think it was 2x for the memory lane width in bits. Fortunately we have DDR manufacturers increasing the bandwidth.
That would be Biden’s law
@@Mfields4517 get outa here… you think Trump and MAGA are more aligned with a science and engineering based society? Freaking quacks, all of 'em. I'll take the lesser of two evils… the same people in the GOP are not the ones moving technology forward in society, today or in the past.
@@Mfields4517 these two laws working together
If you're planning on bringing this one public, just FYI, the patreon plug is muted.
Got it. Just edited it out for this one.
2 months out? Wow, that's quite the lead time.
@IAmPattycakes found the OG stan😅
Ran across a paper that tried to determine how complicated an artificial neural network would need to be just to simulate one neuron - about 5 to 8 layers. Easy to find: "A temporally convolutional DNN with five to eight layers was required to capture the I/O mapping of a realistic model of a layer 5 cortical pyramidal cell (L5PC)."
That was a TCN; the layers equate to temporal context. The 5-8 layers simply represent the time window needed to recognise a spike train, so they're not indicative of complexity. There are better approaches, more faithful to the postsynaptic neuronal response, which are quite small and efficient compared to the TCN approach.
I experienced a TBI in 2017, losing my identity and conscious access to memory. I was already neuroatypical and had seizures, but my capacity for 'linear communication' was often confounded. I knew words and understood when people spoke and could respond, but it became easier to write in symbols with a few terms peppered in for clarification. But I basically 'mapped out' my brain's rewiring process in real time, and there's so much overlap in concepts and terminology--using the computer memory framework as a reference has helped me streamline my understanding and troubleshoot ongoing problems with my memory.
Thanks for uploading such great and detailed information--it's been really helpful!
🙄
Very impressive recovery story Ingrid!
I have had the good fortune of working with many people recovering from TBIs.
Their recovery continues strongly for more than a decade…
I hope this note finds you continuing on your journey!
No two TBIs are the same, but… chances are… you have great memories still waiting to be discovered. The harder you work at memory recall… the more memories get re-connected! An impressive non-linear rate of recovery…
TBI 2011 😃
You come across as a hypochondriac presenting with some kind of personality disorder, perhaps narcissism, that I'm not qualified to speculate about, more so than someone who actually has brain damage.
That's cool
A huge challenge is the interconnects. You could mimic the brain's massive many-to-many network by going 3D, but then run into heat dissipation problems on top of the manufacturing issues.
Liquid (Blood) cooling baby
to make matters worse, those links need to be tuned like an analog circuit
in-chip capillary cooling!
That explains why the brain is so darn power efficient, because otherwise it would just be a glowing orb.
Heat is not such a problem when you run at ~10 kHz, but I agree that network design is very critical for such systems.
Even if they could build a brain-like computer it wouldn't be commercially viable due to the accuracy problem. Analog brains make mistakes all the time. We forget, we overlook, we assume. It is excused as "human error" and as humans we accept it. But we would never accept such flaws from a machine that we cannot even sue for damages.
just another hurdle in the way, with time and funding I'm sure this too will be solved. like you said it's "brain-like", but never will be a brain, only a machine
Yet we can still build airplanes, rockets and computers. Yes, it takes a lot of work and redundancy to make these possible with all the errors humans can make, but if it works it works.
@@rizizum Yes, we use digital computers to build airplanes, rockets and other computers because they are great tools for what the human brain doesn't do well.
@@ethans4783 and what the fuck is a brain if not a very complex meat machine?
Ruler was fine just the way he was. Did he need a brain chip? I believe consciousness is outside of all the hoopla. Two different highways.
13:57 repeating voice line.
I'm seeing this so often in videos now, what's going on?
@@RaverSnowLep An AI bug? :)
@@RaverSnowLep It's a common mistake: while editing the voice lines, you might hear the start of a sentence and, to not lose time, skip to the end of it, so you might incorrectly assume the line is correct because the start is correct and the end is correct. You won't notice it repeats itself because you haven't heard the middle.
The only way to avoid this is to listen to the whole thing, which almost no one does. I think I remember hearing an Oblivion NPC back in 2006 saying something like "let me do that one again" between two identical sentences; you can find it on YouTube.
I designed a neuromorphic processor back in 2004 when I was at The Neurosciences Institute. I could write a book on the neuromorphic paths one could take and the reasons they won't work. You did an excellent job of covering the different areas of the brain that need to be represented in a circuit. There are two aspects of neural computation that are paramount in creating brain-like functionality, but always get glossed over: synaptic plasticity and connectivity. Almost all of the changes in your brain that affect behavior are centered on the efficacy of synaptic propagation; likewise, a lot of the functional differences between areas that have similar neuronal structure are the result of the connectivity within and between neural areas. The memristor is going nowhere - it's a red herring. If someone is successful in creating a commercially viable neuromorphic chip, they will encounter two problems that will stop them cold: 1) no one is trained to use them, as the computation is fundamentally different in every way from a traditional computer; 2) intelligence is extremely complicated, not from an intellectual perspective, but from a conceptual perspective. The effort and education required to be able to use this type of chip and also make the conceptual leap to create an intelligent system is beyond anything available today.
would you agree that there might be research to do on something like a universal language of logic first? Let me explain.
Right now, we have machine "language" at the center, the "ones and zeroes" everyone should know about, with several layers designed to make sense to humans - assembly first, low-level programming languages second, and finally high-level ones and scripts. What we do here is take computer logic and make it human-readable.
I'd think it may come at a cost.
I'd be inclined to say we may be tackling the problem the wrong way right now. Don't get me wrong, the research is all valuable - there's lessons to be learned from all the attempts so far which will no doubt help down the road.
But as so often in human nature, especially when I look at the tedium of office jobs, you get to ask: how do we even know what of all of this is truly necessary? Is there something we overlook? What's the core problem, and did I communicate it correctly? Most times you can't even be sure, because our wirings are not even uniform. At work.
Now try globally. Human cultures, male and female, go down to animals etc.
One might do well to find the atom first, before we dive into how to do alchemy. "Intelligence" as it stands is fairly human-centric in its definition, or in plain terms we wouldn't know it if it bashed our face in right now, because the term is quite old and refers to all kinds of clever outcomes. Stuff like "emotional intelligence" is a buzzword, but its inference itself may lead us to the core of the problem here.
Right now we all have a fuzzy interpretation of what kinds of machines we want to build with this. Maybe it's time to define the problem first?
@@ddlc_monika the problem that we are trying to solve is one intuitively known by everyone: we are trying to build a god. At its core, I think that the race for AI (including the need for better hardware for AI models) is about designing an entity that can solve all of our problems, an intelligent entity inspired by human intelligence but not bound by the limitations of the human condition (i.e. being limited in resources such as time, attention, biological needs, ...), a true embodiment of the philosophical notion of Logos.
There is a universal language of logic; it's called 'First Order Predicate Calculus' or FOPC. I think I understand what you are saying. If you are saying that we don't even have a good definition of intelligence - I agree with you. People use the words artificial intelligence to describe everything that doesn't have a straight linear relationship or has a unique response to a stimulus. The point I was trying to make about the conceptual complexity of intelligence is itself complex. People who haven't lived in the world of neural computation can't see it - they think it is just like neural nets. In truth, neuromorphic is to neural networks as neural networks are to Von Neumann architecture. They may look similar but they are fundamentally different. @@ddlc_monika
Most people I've worked with have more modest aspirations than becoming the spirit of the universe. Our goal was always discovery - of the mind and the brain and the mind/brain. I don't think anyone expects the intelligence clergy to come and bless their work. Few realize, as I discovered during my R&D work, that the test for intelligent systems isn't deterministic, it is behavioral. @@Hollowed2wiz
1) LLMs can translate natural language into any code. Transformers were made for translation. 2) Artificial intelligence nowadays is an emergent property of neural networks; it is not hard-coded symbolically.
For a long time I've had the idea that these systems (neural computers and AI) will need some sort of random generator in the loop. I was amazed to hear that some of these systems actually use one.
Just my gut feeling that true intelligence has to have some form of randomness in it for it to produce novel ideas and solutions to problems.
@user-up1id5rv2m yes, I think it does
Not randomness but chaos. If you think that idea is interesting, then you should definitely look into cybernetics.
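A tiny illustration of that distinction, as a sketch with illustrative parameters: the logistic map is completely deterministic, yet at r = 4 it behaves chaotically - its output looks random, and two nearly identical starting points diverge rapidly.

```c
/* Deterministic chaos in one line of math: x -> r*x*(1-x) with r = 4.
 * Two starts differing by 1e-7 become completely uncorrelated. */
#include <stdio.h>

int main(void) {
    double x = 0.2, y = 0.2000001;  /* almost identical initial states */
    for (int i = 0; i < 40; i++) {
        x = 4.0 * x * (1.0 - x);
        y = 4.0 * y * (1.0 - y);
    }
    printf("after 40 steps: x=%f  y=%f\n", x, y);  /* wildly different */
    return 0;
}
```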
Really great content!
The lesson to take home is really that we have no great theory for why the brain works as it does, and thus we are guessing when we try to replicate it in hardware or software.
Sounds like we have a lot of the how's and I think we understand a lot of the why's. Trying to replicate it is helping to connect those dots. If anything it is deconstructing the longstanding theories that made it seem like magic.
In fact, everything is not as bad as it might seem. People understand a key concept of how the brain works - it is a huge set of nested conditions, which, due to their large nesting, can make an incredible number of matches (speaking as an AI student). The main thing that still remains a mystery is the details of the structure of the brain and the principles of its self-learning. This certainly does not indicate an almost complete lack of knowledge about the functioning of the brain; I would even say that the opposite statement is more accurate. We know more key concepts than we have left to learn.
@@lit1041indeed, the only real issue left is the memory/storage problem. Where woo tends to happen 😂
@@lit1041 I'd say the main mystery is figuring out how subjectivity works, and we haven't made any real progress on that.
This is very evident when you read about medications for depression and bipolar… They all state, "while how the drug works is unknown, it is suspected that…"
This was beyond interesting. I absolutely love the interests you have and out of the 100+ channels I subscribe to, you are one of three that I hit the bell icon for so I get a notification when you drop a new video.
Keep up the amazing work dude. Love it all 🥰
Thanks!
I imagine in the future neuromorphic and Von Neumann architecture will work side-by-side, the former for adaptability and robustness, and the latter for precision and fast math
That's what current AIs are, just not in hardware but in software shape.
I think the brain is so efficient due to its size and the nature of its cells. Nerve cells are essentially small batteries, holding electrical potential and discharging it when told to do so by either primary or secondary neurotransmitters. And in this case the brain can be way more sophisticated, as each nerve cell doesn't act as a logic gate, but rather only fires when sufficient, or the right type of, data is passed to it. It basically acts as an impossibly complex neural network, only we don't have any neural network technology I am aware of that can run on as little energy as the brain.
We have transistors smaller than cells, but each cell also holds a bunch of data about the whole machine (DNA), and also depends on its metabolism, its life span, etc... Basically a single cell acts as a mini-brain, being specialized in what it does but also holding all the data on what everything does. I feel like the reason it's hard to figure out how neural networks solve a problem is the same reason it's so hard to study the brain. Each bit of information passed to the brain is already spread across dozens of trillions of cells.
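That "hold charge, fire on threshold" behavior is roughly what the classic leaky integrate-and-fire model captures, and it's the unit most neuromorphic chips implement in some form. A minimal sketch - the leak, threshold, and input values below are illustrative assumptions, not taken from any real chip:

```c
/* Minimal leaky integrate-and-fire neuron: accumulate input with a leak,
 * emit a spike and reset when the membrane potential crosses threshold. */
#include <stdio.h>

int main(void) {
    double v = 0.0;                /* membrane potential */
    const double leak = 0.95;      /* fraction retained each timestep */
    const double threshold = 1.0;  /* fire when v crosses this */
    const double input = 0.08;     /* constant synaptic drive (assumed) */
    for (int t = 0; t < 60; t++) {
        v = leak * v + input;      /* integrate with leak */
        if (v >= threshold) {
            printf("spike at t=%d\n", t);
            v = 0.0;               /* reset after firing */
        }
    }
    return 0;
}
```

With these numbers the potential climbs toward input/(1-leak) = 1.6, crosses the threshold after roughly 20 steps, and the neuron spikes periodically; weaker input would never fire at all.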
The quality of your videos is outstanding, man.
That mullet joke? Chef's kiss XD
13:57
You turned into an A.I. for a moment 🤖 😁
Mullet computing is the future!
Would that elevate the status of North American hockey players? Or bass tournament anglers in The South? Asking for a friend.... 😉✌️
13:56 vs 14:08 a glitch in the matrix
Anyone else rewind the video to make sure they heard him say the same thing twice? lol I was like, dang did I rewind by accident just at the right moment or what? 😂
yes!
Incredibly compelling video. Learned a lot. Thanks as always
I assume most of the current AIs simulate, via machine learning and neural network principles, to some extent how a brain works. In our own brains there is no need for a simulation, as the rules of the game are hardwired through physiology, chemical neurotransmitters and neuro-electricity... physics, chemistry, biology, brain. Not just having different levels, but having different levels interacting with one other level or multiple other levels in parallel.
I would further assume that, as long as the hardware stays just silicon, there will always be a simulation aspect to it and thereby less energy efficiency, as that efficiency comes from the varied interactions between the different levels of the brain. Am I mistaken?
Yes. There is no reason why silicon couldn't imitate or even surpass the performance of a biological neuron. A biological neuron calculates on the order of hundreds of impulses per second. A digital neuron, if built as an ASIC on cutting edge process node, could calculate on the order of _billions_ of impulses per second. The overwhelming majority of the complexity in a biological neuron is not necessary for the computing the neuron does, so you could really argue that there is a simulation aspect to our brain. Enormous quantities of molecules and proteins that have nothing to do with the logical operations, as opposed to a chip where only the substrate is not an active participant in calculation. At best they have immunological or other physical resilience purposes that an artificial brain wouldn't need to implement.
@@cerebralmand yet, the brain needs over 3300 types of neurons.
Neural networks don't actually simulate how a brain works, since a human brain is not trying to approximate a mathematical function, which is all a neural network is doing. An analogy would be an aeroplane; the aeroplane manages to fly, but it is not using even remotely the same mechanisms for flight as a bird does. The end result is similar, yes, and its designers were probably originally inspired by birds, but the way it is actually achieved could not be more different.
"Simulate" has been the implicit problem with AI. Just because you can argue that your CNN or AlexNet, whatever, appears to be accounting for, say, 70% of what you attribute to be a brain's network response (e.g. the initial retinal network of a mouse), this does not mean your AI network is actually doing anything like the brain network - in fact you're making assumptions on both ends: what the mouse network actually is, and your AI approach. This difficulty was already in full swing with Newell and Simon's General Problem Solver, 1972, where they assumed that because the program's steps were, say, 70% similar to the steps humans claimed to take in solving something like a symbolic logic proof, their simulation approach had validity - that it was actually near emulating the human brain/thought process - when no such thing was necessarily the case.
Repeats on 14:20
Thinking about it: brains are complex, according to the brain. In reality it's quite simple but grand, like how a single neuron may have so many connections, each rather simple in function but so vast in number that it seems difficult to comprehend where or how to even start. Recently they essentially made lab-grown neurons "learn" to play Pong; it's simply a game of slowly reverse-engineering how some aspects of the brain function and how they work in tandem with other functions. It's not a question of "if" but "when". I say 50 years.
Most likely the brain is nothing more than a transceiver.
I don't think we know enough about the brain yet to understand whether the non-digital aspects are needed to replicate its power (probably needed if we want to literally simulate a human brain, but that's not really the goal).
You of course don't know, but there are people out there who know much, much, much more than you.
@@azhuransmx126 And even what they know is a drop in the ocean. The issue is that our tech is fundamentally different from how our body functions. It's like comparing a fish to a whale. Both live in water, but they couldn't be any more different.
@@VanFire87 aircrafts and birds, they don't need to work the same, to do..... the same and better.
@@azhuransmx126 We are fine the way we're going right now, with a few features inspired by our nervous system and the like. Advancement in processing power and efficiency is our way forward, not making a human-brain computer, unless you're willing to overturn and overhaul all of the advancement we have gone through. And don't forget the massive number of human experiments that would be required.
@VanFire87 The area of the brain responsible for controlling offspring rearing in mammals is known as the limbic system, and within this system, the anterior cingulate cortex (ACC) plays an important role in regulating maternal behaviors. The ACC is located in the medial part of the frontal lobe and is connected to other areas of the limbic system, such as the hippocampus and amygdala. This brain area is involved in the regulation of emotions, decision making and evaluation of social stimuli, and its activity is believed to be essential for maternal motivation and care of offspring. In studies in mice, it has been shown that lesion or inactivation of the ACC can decrease maternal motivation and reduce the mother's response to signals from her offspring.
Ablation of the ACC by electroscalpel makes the mother regress to a stage some 200 million years more primitive, the reptilian behavior of animals that lived long before mammals appeared around 230 million years ago (the Permian-Triassic). Scan that area, compile and digitize the algorithms that make it work, calibrate its neuronal weights, load the mother's cortical model onto an artificial substrate descended from current GPUs (probably photonic chips), reconnect its inputs and outputs, and you will see the mother return to caring for her babies as if nothing had happened. Turn off the artificial neural network and the mother will revert to a reptilian state and her babies will starve to death.

In the future, artificial general intelligences will scan the brains of animals throughout biology, from the most primitive beings (flatworms, and early chordates like Pikaia) to the most advanced (apes and humans). With these stored connectomes, a virtual twin of the biological being will be created and subjected to the same gradient, amplitude, and frequency of inputs, and if the two react exactly the same, we will have created a digital twin. In the digital world, copy and original lose all meaning: once something becomes code, becomes information, it can be replicated millions of times, and no one can say they have the original download of a song. We have been digitizing everything: books, audio, paintings, etc. The last thing we will manage to digitize will be the most important, our brains, and it won't even be us who achieve it, but the AIs. Ray Kurzweil describes a similar procedure for mind uploading in his book.
The brain might only draw a few watts most of the time, but it's been on, running continuously, for our entire lives. I think that's where consciousness emerges, and the first AGI will probably be lost, given the speed at which we force it to learn while building these giant LLM matrices.
"Lastly, the brain is an analog device"
No, because the video mentioned that part twice in immediate succession 🤣
The brain isn't an analog device. It's a biological machine, something we have never even tried to build before. Describing it as analog or digital is fundamentally wrong. I don't know of a single machine that runs on chemical signals, is based completely on water, is self-replicating and self-repairing, and has a hard age limit coded into it. Oddly enough, we went more and more electricity-based with our technological innovation, while biological evolution did the complete opposite.
🎯 Key Takeaways for quick navigation:
00:03 💡 *Computers with Von Neumann architecture face limitations due to the Von Neumann bottleneck, restricting performance based on data and program memory access.*
01:01 ⚡ *The brain's efficiency, operating at low power (12-20 watts) with 86 billion neurons, inspires neuromorphic hardware development, aiming to replicate its capabilities.*
02:29 🔄 *The brain achieves exaflop-level computation through parallelism, unlike digital circuits. Lack of synchronization allows flexibility and resilience, using chaos as a feature for efficient learning.*
04:44 🧠 *Neurons, the brain's fundamental units, exhibit diversity in size, speed, and stimuli preferences. They form complex connections through synapses, contributing to computation and memory.*
09:11 🌐 *Silicon neurons, like IBM's TrueNorth and others, emulate neural behavior using traditional CMOS transistors. While scalable and low-power, they face challenges in energy efficiency, scalability, and replicating analog brain behavior.*
Made with HARPA AI
Thanks!
This is insanely interesting. Knowing that this much was tried by big names in the hardware industry makes you wonder what might be going on in their R&D departments nowadays.
Amazing! I am a TMS patient; you should look into this technology. It's an MRI-strength magnet that goes on the right side of my head, and it has honestly saved my life. I want to work on a book about my experience.
at 14:11 it seems you rendered the same scene twice
Hahaha, never heard someone compare computers to a mullet. You are on another level. Bravo.
I remember reading about memristors in an IEEE magazine back around 2010, when I was a freshman taking intro EE classes. Even back then neural nets got a lot of attention, and the article talked about how memristors could be used with neural nets. I remember thinking this was going to be huge, and then nothing for a decade. Cool to see it mentioned now in 2024. The future can come fast or it can come slow, but there's no stopping its arrival. What a time to be alive.
This reminds me of Pluto, where one of the roboticists says that to create the "perfect" robot (the one most like a human) you need to introduce chaos into its mind by flooding it with countless different personalities and strong emotions and letting it bring order to the chaos. This may or may not end up creating a monstrous psychopath, but it will result in the robot most like a human, for good or bad.
A company is using 100 lava lamps to introduce chaos for use in cryptography. Maybe we need to create a tiny chip with 100 micro lava lamps to give a machine a particular personality, making it impossible to predict its actions. How to generate random numbers from nothing has always been a problem in computer science.
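For anyone curious what "random numbers from nothing" means in practice, here's a toy C sketch (my own illustration, nothing to do with the lava-lamp company's actual setup). A pseudo-random generator is fully determined by its seed, so real unpredictability has to be harvested from physical noise; on Linux, that's what the /dev/urandom entropy pool is for:

    #include <stdio.h>

    /* Toy linear congruential generator: same seed in, same sequence out, every time. */
    static unsigned int lcg(unsigned int *state) {
        *state = *state * 1664525u + 1013904223u;
        return *state;
    }

    int main(void) {
        unsigned int s = 42; /* deterministic seed */
        unsigned int a = lcg(&s);
        unsigned int b = lcg(&s);
        printf("PRNG: %u %u (identical on every run)\n", a, b);

        /* The OS entropy pool is seeded from accumulated hardware noise. */
        FILE *f = fopen("/dev/urandom", "rb");
        if (f != NULL) {
            unsigned int r = 0;
            if (fread(&r, sizeof r, 1, f) == 1)
                printf("entropy-seeded: %u (different every run)\n", r);
            fclose(f);
        }
        return 0;
    }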
14:42 its the shirt for me
Your description on how the brain processes data reminds me of OpenCog's Atomspace.
That uses a mixture of different structures joined together in various ways to both represent knowledge and process data. It is a metagraph-based system, whereas the brain (don't quote me on this) seems like a graph-based system.
It's weird to see a line of similarity between those two things.
The most important thing is to continue learning and updating existing methods
A small moment of deja vu at 13:53 and 14:09. Excellent video though.
That digital neuron could be a bit better if it used DRAM instead of SRAM.
Speed is obviously not needed, and DRAM takes about 8 times less silicon area than SRAM.
The problem is that you need to keep refreshing it, since each DRAM cell is essentially a tiny capacitor.
4:06 should be a high signal-to-noise ratio.
I wonder, as neuromorphic computing develops what would software be like? If memory and computation can be combined, could the line between software and hardware also be blurred? In the brain, is there a difference between the substrate and its program?
Probably you are just going to use some different Python libs for that :-D. Compiler developers will take care of the rest.
2:00 the 21st-century materialism crisis
Maybe that feeling when you hit your funny bone is your clock nerve shorting to the muscle activation nerve! Perhaps we have a global clock line!
Take a look at Susan McKinstry's work on moving away from the von Neumann architecture; it's pretty cool and is another road for this type of work. She's a professor at Penn State and has videos of her research online.
"The work goes on"
Powerful
Good explanation of the differences. Using a digital approach we can only simulate a finite number of neurons, hardly comparable to a brain. Perhaps the analog nature of quantum computers might get closer, but they will be limited to an even smaller number of neurons; I think the current limit is fewer than 200 qubits.
We do not know how memories are stored or how the brain computes. We know synapses have something to do with them, yet we can destroy the synapses and the memory trace remains (Tomas Ryan's work, also mentioned by Randy Gallistel).
14:20 did you stutter? 🤣
Audio error at 14:25
Oh, that's a neat extrapolation of sci-fi at the end. The mullet joke? No. That was bad.
I got a good flash of a cool cybernetic sci-fi setting where, instead of humans with cybernetic implants, it's robots with memristor or IBM TrueNorth-type hardware that makes their randomness more similar to biological life.
1:30 you could say the brain is the one example of a quantum computer in nature
A parallel processor also doesn't need a fast clock speed if it can access all of memory while visual, auditory, and tactile feedback are being processed simultaneously.
While you simultaneously maintain a full conversation....
Everything the brain does is technically relativistic
What's the clock speed of my brain while on LSD and DMT? It can probably hit more exaflops too...
Ever seen what that brain activity looks like on MRI or brainwave scans?
Definitely "100%" brain activity.
So much brain activity that parts of your brain which don't normally communicate start communicating.
Like the conscious and unconscious parts of your brain.
Like FPGA-style reprogramming of the brain? These drugs do cause nerve cell growth, but much more controlled than stimulant drugs like amphetamine or cocaine (which lead to issues..), and they can reconfigure the internal "data bus" of your brain structure.
Don't forget the spinal cord is also part of the central nervous system and handles the unconscious reflex system.
You answered the question in the video.
The brain is only capable of achieving such low power consumption because it makes mistakes.
Computers take very high amounts of power, but they are deterministic.
The memory bottleneck is a more serious problem than Moore's law. A senior dev on my team once wrote a small C program to show how memory-bandwidth-limited modern CPUs are: it would just read serially from an array of 1 billion 4-byte integers and time it, reporting elements parsed per ms. In testing, even a single Zen 2 core would saturate the 40 GB/s of dual-channel DDR4-3200, let alone 8 cores. It's almost magic how CPUs and GPUs can still work so fast while being so bandwidth-starved, especially on servers.
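For anyone who wants to try it, here is a rough reconstruction of that kind of test (my sketch, not the original code; Linux/POSIX timing, allocates about 4 GB, compile with gcc -O2):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N 1000000000ULL  /* 1 billion 4-byte ints, ~4 GB */

    int main(void) {
        int *a = malloc(N * sizeof(int));
        if (a == NULL) { fprintf(stderr, "allocation failed\n"); return 1; }
        for (unsigned long long i = 0; i < N; i++)
            a[i] = (int)i;  /* touch every page before timing */

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        long long sum = 0;
        for (unsigned long long i = 0; i < N; i++)
            sum += a[i];    /* the serial read being measured */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double secs = (double)(t1.tv_sec - t0.tv_sec)
                    + (double)(t1.tv_nsec - t0.tv_nsec) / 1e9;
        /* printing sum keeps the compiler from optimizing the loop away */
        printf("sum=%lld, %.2f GB/s\n", sum, (double)(N * sizeof(int)) / secs / 1e9);
        free(a);
        return 0;
    }

At 40 GB/s, a single core reading 4-byte elements is already parsing 10 billion elements per second, which is why adding more cores just starves them all.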
How can you gauge Moore's Law? Noise and non-linearity seem to be a path problem.
nice video.
Repeats 2x @13:55: "and lastly the brain is an analog device, digital devices are an ill fit for replicating their behavior, so we need to incorporate the analog element, which limits the system's flexibility"
From an application perspective, would've been worth mentioning Event Cameras as well.
Mullet joke got me :'))
Oh, could you maybe do a video on the future of analog computing?
Anyway, love your videos.
video on analog computing would be dope
As a software developer, something that always seemed weird about robots and AI in science fiction was how many were described as having electronics dedicated to various functions, like an "emotion chip" or "justice circuit." I thought it would make much more sense for them to use general-purpose computers where all of these features were programmed in software. Having "algorithms" or "subroutines" implementing these human qualities would sound more realistic. The terms used to describe the actual design were usually just arbitrarily picked by the author and not relevant to the story, except maybe as a plot device where individual hardware failures or lack of parts availability could render a subset of a robot's "personality" nonfunctional without affecting the rest.
Now that AI is being developed and commercialized more, it turns out some of these functions do in fact make more sense to implement in specialized hardware.
It's an easy and safe plot device because it's modular and plug-and-play.
If the "improvement" bombs in the ratings, the writers can easily take it back out without rewiring everything inside the robot/AI character.
Aren't our microbiota and other factors like that effectively those dedicated chips?
Emotions in the brain are mediated by neurochemical release in response to the conceptual interpretation of a stimulus. A typical person will recognize a familiar face in their environment, like a family member, and their brain will trigger an oxytocin response from interacting with them, which strengthens that neural pathway and its link to the internal concepts of comfort and safety, provided the other person's actions, as interpreted, reinforce it.
13:58 this is repeated twice
@13:57 repeat on purpose or glitch in the neural net?
I was NOT prepared for the mullet simile!!
Excellent explanation, great video. This is why I subbed.
I can literally feel what you are describing about the brain computing; idk if it's the ADHD or something else.
If neuromorphic hardware were built in solid-state chips, could it mimic neurons' ability to grow, die, multiply, and morph their shape? How important are these aspects to the brain's ability to compute?
This really helps explain Intel's pivot towards NPUs for me; thank you for making the vid. Of course AI is the current trend, but this really helps show how it can be more than just a trend.
The truth is that researchers have not managed to implement quantum pathways in hardware.
Random chaos? Check out the recent discovery that quantum mechanics appears to play a key role in photosynthesis. This area of research into making brain-like circuits may require quantum biology.
Like a mullet, hahaha. Thanks for making informative videos. I missed a few recent ones and I'm just now going through them.
Thanks for the great view, as always, on the subject. In electronics academia we talk about nothing but this, and to be fair I'm kinda tired of it, but your opinions are very refreshing.
I'd like to hear more about isentropic / reversible computing
would be dope as well
This was a very nice video, thanks :)
Making brain computers is quite easy.
It's called having a child. AKA a new brain cooperation partner.
Costs extremely little too. So little it was even possible tens of thousands of years ago.
Training takes too long and is too expensive though
@@rizizum It costs almost nothing to train; it mimics by itself. In fact the most successful schooling systems are mainly powered by children interacting with each other, like the ACT system.
Very automated and very scalable when done properly.
Reproducing human brains costs almost nothing nowadays, but reverse engineering them is the toughest part.
"Best of both worlds, like a mullet." Well done
I gave a lecture in 1999 at Xiamen Da Xue (Xiamen University) on neural networks to grad students and some faculty. I was not yet an academic but was teaching computer science. The lecture bombed, and I was told by a kind, open-minded discrete mathematician on the faculty that my lecture would be better received in the medical school. No one was interested.
Very interesting and good video. Would also love to watch a video about bio-computers...
Thank you for this video. Very educational.
The mullet joke at the end was priceless. It was the delivery. 🙂🖖🏻
Business in the front, party in the back.
Thanks. Near the beginning you said that traditional CPUs achieve a low signal-to-noise ratio, when you meant to say a high ratio.
Or maybe you meant low total noise, as opposed to a low noise ratio.
It's sort of ironic that this lovely peek into brain analogs has its own Matrix moment: the cat walking past the door, twice.
Look for it around 14:21.
Did you do that on purpose?
Always fantastic quality !!
Can you explain how human memory works?
Is there a HDD somewhere up top?
There is a 1988 publication in Biological Cybernetics on mimicking a neuron with an AND gate, so could a digital circuit fit after all? How could people have missed it for more than 35 years?
The vocal delivery is so serious, and then there's a joke that throws me off.
Good video. I thought the references you made were a smart inclusion; I figured for sure they'd be great engagement bait, but I don't see any comments about them. I suppose I'm the only idiot here, so I'll say it: man, I love It's Always Sunny. Also quite funny that you used the Avengers meme quote, quite randomly.
Hi Jon. Why did you duplicate the section "Brains are analog" straight after one another?
He was testing us - wanted to see how many would notice....😊
For emphasis.
How is the RNG done for IBM TrueNorth? How many RNG modules affect how many and which components? It seems like one of the better candidates for a sentient AI, yet it looks like nobody has tried to eke out some sentience that might nudge the RNG to exhibit agency one way or another. An Aaronson Oracle can predict whether a human or code is trying to win at rock-paper-scissors; perhaps this test could show whether TrueNorth behaves more like a human when programmed to play optimally. If it drifts in a chaotic way like humans do, this could be a test for sentience or agency.
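For what it's worth, the Aaronson Oracle idea is simple enough to sketch. This toy C version (mine, not any published harness) counts which move tends to follow each two-move history and then counters the most frequent continuation; against truly random play it wins a third of the time, against most humans noticeably more:

    #include <stdio.h>

    int main(void) {
        int counts[3][3][3] = {0}; /* counts[prev2][prev1][next move] */
        int p2 = 0, p1 = 0;        /* last two opponent moves: 0=rock 1=paper 2=scissors */
        int move;

        printf("Enter moves as 0=rock 1=paper 2=scissors (EOF quits):\n");
        while (scanf("%d", &move) == 1 && move >= 0 && move <= 2) {
            /* Predict from history (which does not yet include this move)... */
            int best = 0;
            for (int m = 1; m < 3; m++)
                if (counts[p2][p1][m] > counts[p2][p1][best]) best = m;
            /* ...and play the move that beats the prediction. */
            printf("I played %d (predicted you would play %d)\n", (best + 1) % 3, best);
            counts[p2][p1][move]++; /* learn from what actually happened */
            p2 = p1;
            p1 = move;
        }
        return 0;
    }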
What about using a quantum computer for neuromorphic computing?
As yet, there does not seem to be much evidence for the brain using quantum mechanical effects in any computation.
So, if you want the hardware to resemble the mechanics of the brain, quantum computers don’t really seem like the thing to do (given current understanding).
(Also, the quantum error correcting codes developed so far aren’t good enough given current hardware to get error correction to break even, I think.)
@@drdca8263 There's also not much evidence that the brain uses "computation" in the classical sense at all; it is just a utility word we use to represent the combined, apparently goal-directed activity of neurons. But we do know that the senses have something to do with quantum mechanics in a sense, and we do have limited, specific evidence about possible quantum interactions in the brain, such as the quantum smell experiments. And we have some more limited evidence that quantum mechanics and indeterminism appear to have something to do even with unicellular organisms (like Martin Heisenberg's work on the topic). So while this is mostly theoretical and has limited evidence (as does any attempt to explain the brain and consciousness in general), it is far from being something absurd or not grounded in anything at all.
@@diadetediotedio6918 “but we do know that the senses have something to do with quantum mechanics in a sense”? Huh?
And yes, I did waffle on whether to say “computation” or “multi-cell correlation of behaviors” or whatever.
Still, the point remains: if we want to do computation on devices that work physically in brain-inspired ways, short of “eventually maybe we could simulate the cells at a molecular level”, we don’t have anything suggesting any *particular* way that a quantum computer would be well-suited for that.
Party in the back.....
Priceless 🧠
Just to nitpick about neurons: while axons are indeed output-only, dendrites can be both inputs and outputs.
Repetition of "and lastly, the brain is an analog device...." at about 14:00.
The Von Neumann bottleneck is not the reason why neural computing is popular; it is pretty easy to make hardware not based on that model. It is because it takes several deep learning layers to simulate just what we know about neurons' ability to communicate, and we still know very little.
Pretty hard to beat evolution, and I love the beauty of life trying to optimize itself.
Not all white papers are legit…
I looked at the 1971 memristor paper and it seems like it's reaching for greatness, like it wants to be first to coin the term in case someone else comes up with a working system.
At the time, integrated circuits and microprocessors were brand-new, cutting-edge stuff.
We had the naive opinion that the human brain could be replicated as AI (HAL) with these then-new technologies.
This paper, though historically interesting, seems to be more about publishing papers as a requirement of maintaining academic status than about pushing the state of the art forward.
I could be horribly wrong, but after 52 years nothing came of it. Not even from the Soviets or China.
Yeah, I was around then…🤣
Brilliant content man, and what an epic ending 👌😁 creme de la creme
"Like a Mullet," had me dying.