As a young PhD student in Computer Science, I can say your explanation of how neural networks came to be and evolved, and the math behind them, is the cleanest and most accessible that I have come across. As I focus on computer architecture, I came to this video without many expectations of learning anything new, but I am glad I was wrong. Keep up the great work!
Fun fact: that graphics card he's holding at 18:00 is a Titan Xp with an MSRP of $1,200. He says it draws 100 W, but it actually draws about 250 W, so that tiny chip that only draws 3 W is even more impressive.
In general, when doing machine learning you are only using the CUDA cores of a graphics card, so the wattage never gets close to its maximum. A lot of the processing units are simply not being used, for example the shaders and 3D processing units. On my GTX 1080 I sit between 60-90 W out of 200 W when doing PyTorch machine learning, so I think 100 W out of a maximum of 250 W seems reasonable.
You can underclock GPUs; that's what they do in crypto mining to improve their profit margins. Depending on the chip, they can operate efficiently at a fraction of their nominal power.
One of the first things I learned in Electrical Engineering is that transistors are analog. We force them to behave digitally by the way we arrange them to interact with each other. I'm glad there are some people out there that remember that lesson and are bringing back the analog nature of the transistor in the name of efficiency and speed.
Well, semi-analogue. Don't forget the bias (voltage drop) before you get current amplification. Also, the claim that analogue computers are more power efficient than digital is pretty hard to back up. A $2 microcontroller can run on a few mA for the desired task, then sleep on µA. You'll need at least 5 mA for an analogue computer to start with, and you can't make it sleep.
@@JKPhotoNZ Great point. And with current nanoscale transistor technology, that efficiency (along with raw power) is going far beyond what a true analogue system could produce.
Yeah, but then you'll never know which region it's operating in, because amplification happens differently for different input parameters. And not all transistors from the same batch will perform the same, i.e. it will lack repeatability (as Derek mentioned).
The insoluble (even in theory) problems of analog are noise and signal integrity, which is why he didn't even mention them. This channel has gone to poop honestly.
Analogue was never meant to die; the technology of the time was the limiting factor, IMO. It looks like an analogue-digital hybrid system can do wonders in computing.
I know this is an old comment, but I figured I'd add that as far as physical packaging goes, nothing stops us from putting one of these next to a conventional CPU. Cooling it would be the hard part, as the temperature would swing the outputs by introducing noise. Might be better as an M.2 PCIe device.
@goldenhate6649 As we've seen, they can be built on NAND processes already, which are widely adopted in consumer electronics. The low-power wake-word and condition-detection use case they mention seems like a great application if they can find the right product in the consumer space.
For amusement only: my first day at work was in 1966, as a 16-year-old trainee technician, in a lab dominated by a thermionic valve analogue computer (or two). These kept us very warm through the winter months and cooked us during the summer. The task was calculation of miss distances of a now-obsolete missile system. One day I was struggling to set up a resistor/diode network to model the required transfer function, but the output kept drifting. I found the resistors were warming up and changing value. Such are the trials and tribulations of analogue computing....
Thus the temperature coefficient is very important for modern precision devices. And a high-accuracy, low-ppm resistor is expensive, which is one of the reasons high-end electronic instruments cost so much.
I was going to comment that one disadvantage of analog computers is keeping them calibrated. If you want a precise amount of 'voltage' or movement to represent a real-world value, you have to keep it calibrated. Older mechanical ones had wear/tear, electronic ones have issues as well.
I see Derek is getting into modular synthesizers! Also, funny to see how the swing in musical instruments to go from analog to digital and back is being mirrored in computing generally.
As a semi-professional music producer with almost half a decade of working with professional musicians, I would agree - and this is mainly because people feel a lack of "soul" in music: those small human errors that we've spent decades trying to get rid of with Autotune, drum machines, sequencers, digital synthesisers and digital samplers (the last two CAN create sounds that will always come out the same way as long as the input stays the same - though there are exceptions). This is probably what the people I know in the music industry refer to as "the generation rule": in brief, the music of today is a result of what our parents and grandparents heard, combined with new technologies and pop culture. If you're interested in music and maybe want to stay ahead of the game, look it up. Some refer to it as the "30-year rule" as well.
I had a business analysis course that tried to explain the perceptron and I didn't understand anything; I don't have a strong maths background. This video is pure genius. The way you explain ideas is amazing and easy to understand. Thank you so much. This is my favourite channel.
My professor always said that the future of computing lies in accelerators -- that is, more efficient chips that do specific tasks like this. He even mentioned analog chips meant to quickly evaluate machine learning models in one of his lectures, back in 2016! It's nice seeing that there's been some real progress.
That's pretty much where things have always been: using basic building blocks that do specific functions. A linear voltage regulator has the job of maintaining a constant output voltage for a given set of current levels and different input voltages. You can buy an op-amp and use resistors to make a circuit called a Schmitt trigger. Or you might just buy a Schmitt trigger from Texas Instruments and put it onto a board with less board space consumed. Or a Schmitt trigger might be embedded for free in certain other ICs (integrated circuits). The major computing engines I have seen so far have been effectively GPUs, CPUs, and FPGAs. Xilinx and Altera (now Intel) have specialized in making FPGAs. An FPGA's basic internal components are logic elements with flip-flops with reset and async reset inputs, a 4-input look-up table, etc. Cascade these to make larger units like a multiplexer, a floating-point arithmetic unit, and so on. It's programmable, so you can effectively emulate a worse-performing specialized CPU. A CPU is still more efficient at doing CPU-type functions. A GPU does specific stuff as well. The idea of doing analog computations honestly just sounds like another building block to add into a complex system; there simply hasn't been a large enough demand to justify generating specialized hardware like what was described in this video. That one start-up sounds like it's developing a chip that will do a series of very specific functions and will need to be integrated into larger systems to accomplish a specific task.
Well, that has sort of always been the case. I don't know what the first accelerators were, but one of the fairly early ones was the FPU; we now just take it for granted. Sprite accelerators were also fairly early. Then graphics accelerators, then video decoders/encoders, then MMU accelerators, then 3D accelerators, then SIMD accelerators, then T&L accelerators, then physics accelerators, then ray-tracing accelerators, then deep learning accelerators.
@@calculator4482 FPGAs have also been around for decades, plus they draw more power. I'm a computer engineering student currently designing a CPU to be synthesized onto an FPGA. I'm not dumb.
started watching this channel when I started high school and now that I'm about to get a phd in mathematical logic, I've grown an even deeper appreciation for the way this channel covers advanced topics. not dumbed down, just clear and accessible. great stuff! (and this totally nerd-sniped me because i've been browsing a few papers on theory of analog computation)
Absolutely. Love the way he covers the concept for everyone. Those who don't know in depth about it can still go away with a sort of basic understanding. And those who do understand it in depth will enjoy discovering new areas of invention that they can further explore. Looking forward to reading some papers on using analog computing in neural network applications
I am a process control engineer, born in '63. In the '80s we used analog computers to calculate natural gas flow for the oil and gas company. A simple flow computer was around 10 kilos, full of op amps and trimmer pots. It was a nightmare to calibrate it. :)
I think NASA (or was it still NACA at the time?) was able to simulate flight characteristics with analog circuits too. I'm thrilled to see this tech coming back!
AI researcher here; you did a great job on this. For anyone interested, the book Perceptrons by Minsky/Papert is a classic with many proofs of limitations and explorations of the limits of the paradigm. It still holds up today, and it's fascinating to read what scientists were thinking about neural networks during the year of the moon landing!
Great job on the AI subject, but... is this analog computing thing really happening? He only visits one company, which no longer exists, BTW... I don't know.
Mythic still exists, and they're funded by BlackRock and Lockheed Martin, so they won't disappear any time soon (source: the footer of Mythic's website). @JoseAntonio-ng5yu
As a 70-year-old boomer my technical education involved building and testing very basic analog devices. Thanks for this video, it helped me to a better understanding of neural networks.
@@vedkorla300 You have plenty of road left to travel, follow what you love, enjoy the journey, don't let the bumps in the road stop you and if you can get a soul mate to share it with you it will all be good.
As a NAND flash engineer, that bit about using floating-gate transistors as analog computers is interesting, particularly because in flash memory there is a thing known as "read disturb", where even the low voltages applied to the gate to query its state (like during a flash read) can eventually cause the state itself to change. You would think it is a binary effect, where if the voltage is low enough it would just be a harmless read, but no... eventually there will be electron build-up in the gate (reading it many times at low voltage has a similar effect to programming it once at a high voltage). In this particular application, the weight would increase over time the more you query it, even though you didn't program it to be that high in the beginning. It's interesting because it's sort of analogous to false memories in our brains, where the more we recall a particular memory, the more inaccurate it can potentially become.
I’ve been an engineer for 44 years. Great video. I actually worked on analog computers in the 70s when digital processing was still new. Never to this level though. Great job!
It's amazing how fast all of this is evolving. Looking at this and comparing it to the facial recognition software in simple phone apps we have now really shows how much all of this has influenced what kids and teens easily use today.
I didn't catch the point where he stopped talking about analog systems, though - when he went to the logic systems being used for matrix operations, that was digital. There may have been analog inputs into the system, but there's an A-to-D conversion, and everything he showed at the end was strictly digital, so it's a bit misleading there. Current systems for AI are digital.
This is a very well explained video on analog computing. Never could I have thought the topic of analog computing could be covered in a 20-minute video with such phenomenal animation and explanation. Respect your work and effort to make science available to all for free. Respect 🙏
It’s funny, for those of us who are into electronic music production, analog never left! There are lots of great analog synthesizers out there that can produce all kinds of complex waveforms, and some of us have been known to tack an oscilloscope on to modular gear to view and use those waveforms. Even some relatively simple gear can produce complex, “3D” structures with the correct cable patches. A lot of what you described at the beginning is the backbone of synthesis for music, and the same principles obviously apply to mathematical operations.
You can do everything digitally that an analog system can do and more. An example is resampling in order to change the frequency scale of a recording. This can be done in real time using digital methods, not so much for analog methods.
@@rogerphelps9939 Depends on what you’re doing and what’s important to you. Analog synths are great for experimenting with the knobs and patch bay (if available) and learning what exactly each change has on the overall waveforms. They’re really great for learning what exactly you’re doing and what you’re getting as a result. Yeah, there are software synths meant to emulate hardware knobs and a patch bay, but I haven’t found clicking through all that as valuable as plugging and experimenting yourself. That stuff really depends on the person, though. What doesn’t depend on the person, and is arguably more important, is the fact that aliasing can end up being a problem on digital synths. When you start doing some crazy cross modulation between sources and/or you’re dealing with lots of harmonics, if the processor can’t keep up, your sound will suffer. Same with super high frequencies. Depends on the synth, of course, but analog synths can tend to have a warmer, purer sound to them as well, because you don’t have to emulate all those harmonics. It really comes down to the same arguments being made here regarding analog computers: there’s no processor overhead needed to create some very complex shapes, and to do so perfectly accurately, on analog. I use both types of synths, as lots of people do, and I would never say that one somehow makes the other unnecessary. Hell, there are hybrid synths that give a mostly analog signal path while allowing for, say, a digital sample and hold circuit and the ability to save certain parameters. People make those kinds of things for a reason, you know?
@@rogerphelps9939 The difference is that you need billions to trillions of transistors to do digitally what can be done with tens to hundreds of transistors in analog.
@@RAndrewNeal Wrong. The errors arising from component tolerances, noise and temperature dependent offsets make anything complicated pretty much impossible in analog. Transistors in digital processors are extremely cheap. Provided you have good DACs and ADCs you can do anything to whatever precision you need in digital.
This actually helped me a lot to understand how neural networks work in general. For me it was kind of like black magic before. It still is to an extent, but knowing that modern neural networks are essentially more complex, multi-layered perceptrons helped a lot.
Yes, indeed. Something I find fascinating are recurrent networks, where some neurons feed back into the network which would allow some information to be saved from one image to the next one. This would allow the AI to process things that change in time, like music and video. For example, if you're tracking a subject with a camera and he turns around, a recurrent AI would be able to continue tracking the subject.
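To make that concrete, here is a rough sketch of a single recurrent step in NumPy (toy sizes, random untrained weights, everything made up for illustration) - the point is just that the hidden state h is fed back in, so each frame's output depends on earlier frames too:

```python
import numpy as np

rng = np.random.default_rng(0)
W_in = rng.normal(size=(16, 8))     # input -> hidden weights (toy sizes)
W_rec = rng.normal(size=(16, 16))   # hidden -> hidden: the recurrent feedback
W_out = rng.normal(size=(2, 16))    # hidden -> output (say, predicted x,y of the subject)

h = np.zeros(16)                    # hidden state carried between frames
for frame in range(10):             # pretend these are consecutive video frames
    x = rng.normal(size=8)          # stand-in for per-frame input features
    h = np.tanh(W_in @ x + W_rec @ h)   # new state depends on input AND previous state
    y = W_out @ h                   # per-frame prediction, informed by past frames
```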
The video seems to contain some quite biased info, though. The top-5 error rate of 5.1% for humans is of course not accurate; if human beings were that bad, we would accordingly have much, much higher car accident rates. Those kinds of inaccurate percentages come from statistics based on captchas, and several conditions distort the results there. - Human users often don't bring the concentration and attention to solving captchas that they actually could; in fact they are annoyed by them and often just click quickly through. In traffic, while driving a car, human beings are much more alert and make tremendously fewer mistakes. Here human beings still beat autonomous driving computers by several orders of magnitude, measured in car accidents per million driving hours. - The captchas often do not match actual human perception. The captcha images are often unclean, with low resolution and distortions. In the real world, humans perceive their surroundings at much higher quality than some crippled captchas, and a clearer image increases recognition dramatically. It really matters in educational videos from whom and from where you take your numbers; science is not always as objective as we are told, especially when corporations with their own interests are funding it. Other than that, the video is quite interesting. I also wish many common misconceptions had been cleared up; for example, many people still believe that computers work like human brains. This is plain nonsense, mostly spread by science fiction. The brain still poses big mysteries to us, especially "the big problem of consciousness".
It is still kind of voodoo or black magic. While the overall working mechanisms are well known and the output can be estimated based on the input, how the neural network exactly reaches to the answer is nearly impossible to inspect because of the sheer amount of variables. In essence, you feed a black box with something and you can expect it to give you a particular answer with some confidence, but no-one has any damn idea what exactly happens inside the black box.
I recall our control systems teacher at university in the '90s saying the Space Shuttle flight controls contained analogue computing, because they had to process input from several thousand sensors to produce outputs, and digital was just too slow for the job.
@@rogerphelps9939 The Space Shuttle did indeed use analog computing for some of its flight control systems; it used a hybrid digital/analog system for flight controls. Most of the high-level control logic was handled by digital computers, but critical low-level control functions were performed by analog circuits. The analog components were able to process sensor inputs and produce control outputs much faster - on the order of microseconds - compared to even the fastest digital computers of the era, which took milliseconds. This speed was essential for stability during flight.
I can imagine it also being more fault tolerant. Discrete = hard, continuous = easy. An overflow in digital can literally crash a whole system; in analog there is more room for error.
Mythic's approach kind of reminds me of the copper-wire memory planes with ferromagnetic rings representing the bits, used as memory for the AGC (Apollo Guidance Computer). The video covering that memory was released by Destin, aka Smarter Every Day, and accompanied his and Linus Sebastian's meeting with Luke Talley, a former IBM employee who, at the time of the Apollo missions, was a member of the data analysis teams responsible for the analysis, evaluation and processing of the telemetry data received from the Apollo instrument ring.
My undergraduate work was actually with a professor who did research on the brain as an analog computer, using neural networks and analog computing as an attempt to achieve super-Turing computation. A researcher whose name is worth looking into in all this, from my research, is Hava Siegelmann. At the time I understood much less about the problem; my task was essentially to try to prove that analog computation could be modeled with a neural network on a digital computer. Not sure if my comment will be buried or not, but it's an area worth looking into if you're more deeply interested in this problem.
0s and 1s, highs and lows, voltage and no voltage (the digital representation of numbers) have absolutely nothing to do with brain neurons. It's complete BS. For all we truly know about the brain, there could be a near endless amount of information in every firing of a neuron. We have no idea what format consciousness information is in, and likely never will as humans bound "in time".
As a guy who helps manufacture flash memory I find this really intriguing: especially because flash memory is continuing to scale via 3-D layering, so there’s a lot of potential, especially if you can build that hardware for multiplication into the chip architecture.
@@ravener96 With many ML algorithms you can split problems into multiple sub-problems for different networks to handle. I wonder if developing that area of ML would be helpful for making effective analog systems? For example, in image processing a pixel at the top left of the image has little interaction with a pixel in the bottom right of the image compared to nearby pixels. If you wait to compare them until multiple layers later, it speeds up processing the image and allows the algorithms to become more adept at finding sub-patterns in the image.
@@Zeuskabob1 It depends on what kind of image processing the neural network is doing. If the computer wants to identify a person's face, maybe it doesn't need to process all pixels once it has processed the pixels near the face, but in some cases distant pixels can indeed be correlated, like images from a camera in an autonomous car identifying the white lines of a street, where it could be 99% sure it is a straight line but the corner pixels clearly indicate it is a curved line.
Flash cells are micron scale, while the AI accelerators doing integer operations are built with the latest 4 nm technology. And floating gates have a really limited life compared to pure logic circuits.
Freaked me out to see that opening analog plug board. That’s how I learned programming in my first data processing class at Fresno State University-in 1964. Eerie to have that memory rise.
I graduated Fresno State in 1993 with a BS in CS (after a stint in the military). I even took an artificial intelligence class. I still have my perceptron book.
I remember seeing an analog differential calculator in high school in my physics and electronics teacher’s lab. It was more of a museum piece. It was never used. RIP Mr. Stark
@@rogerphelps9939 The words of someone who knows nothing but his own little world. And he is content with it. Honestly, I'm jealous. For real, stay that way or life will get an awful lot harder. I would give everything I have to acquire such luxury.
The problem with this system of computing is that interference is a huge factor. When you only test whether there is voltage or not, you don't need to worry about interference. But when you build systems that use varying voltages, say 0 V, 0.5 V, 1 V, then you need to worry about interference, and the more levels you add, the bigger an issue this becomes. Interference can come in the form of microwaves, radiation, mechanical vibration (think fans cooling off a server rack), and the list drags on, as almost anything can cause interference. That oscilloscope used in the example is an expensive piece of equipment that minimizes interference, but the cost is far higher than with a binary number system.
Binary computers have to deal with interference too, that's handled by error correction. Error correction is already baked into the infrastructure of every digital component, to the point where we don't realize it's there. They suggested one method of error correction in the video, and they're probably not even scratching the surface of what's possible.
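To illustrate the kind of correction meant here (a toy sketch, not how any particular chip actually does it): a Hamming(7,4) code protects 4 data bits with 3 parity bits and can locate and fix any single flipped bit.

```python
import numpy as np

# Hamming(7,4): 4 data bits -> 7 coded bits, any single bit flip is correctable.
G = np.array([[1,1,0,1],[1,0,1,1],[1,0,0,0],[0,1,1,1],
              [0,1,0,0],[0,0,1,0],[0,0,0,1]])                    # generator matrix
H = np.array([[1,0,1,0,1,0,1],[0,1,1,0,0,1,1],[0,0,0,1,1,1,1]])  # parity-check matrix

data = np.array([1, 0, 1, 1])
code = G @ data % 2                              # encode
code[4] ^= 1                                     # flip one bit to simulate interference
syndrome = H @ code % 2                          # non-zero syndrome locates the error
pos = int("".join(map(str, syndrome[::-1])), 2)  # syndrome read as binary = bit position (1-indexed)
if pos:
    code[pos - 1] ^= 1                           # correct the flipped bit
```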
I think the crux is the medium: the AI models the brain, which is so good at rebuilding itself and uses electrons and chemicals to convey information, and our transistors are too limited to mimic that interaction. For example, the new 3 W chip needs to be custom-made for each model, if I got that right.
@@introprospector Yes, but with binary, error correcting is simpler, as interference isn't as much of a burden on the architecture. When the job is to check whether there is or isn't voltage, it is a lot less complex than checking 8 different voltage thresholds.
@@fltfathin This is called an ASIC (application-specific integrated circuit); the computer is pretty much just sent to the landfill after it has outlived its usefulness instead of having the ability to be repurposed. Which is another concern about where computing in general is heading: as PCBs use fewer and fewer semi-precious or precious metals, there is less incentive to recycle.
@@dust7962 Yeah but I put my mobo in the case first and then the radiator wouldn't fit, so I had to take it out and install the radiator first. It was really annoying. I didn't watch this video, but I'm assuming this is what he was talking about.
I used to calibrate analog computers that ran experiments and test equipment. They were often odd mixtures of analog and digital technologies. Near the end I had to keep a few machines alive as they aged out of tolerance, there was always a way you could tweak out some more performance by shifting the calibration away from areas you didn't need in a much more forgiving way than any new digital could.
Wow, that analog chip sounds extremely competitive. I'm surprised they already have something that good. Mad props to the guy who figured out the hack with the flash storage.
Integrated circuits like that have always been really efficient - the downside is that they are extremely specialized. As the guy said, it's not a general computation chip; it can literally only do matrix multiplication - but that it can do really damn efficiently (though slightly imprecisely, which is likely still good enough for neural network purposes, since they aren't interested in the exact value of the result). So, sure, that's competitive for that one purpose - but useless for anything that isn't that purpose. If the type of calculation they can do is in high demand, though, they can likely sell a lot of specialized hardware, either for specific devices or as plug-in cards for computers that supply fast matrix multiplication operations.
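For reference, the whole operation such a chip accelerates boils down to something like this (a NumPy sketch with made-up sizes and 8-bit-range values). In the analog version the weights are stored as cell conductances, the inputs arrive as voltages, and the summation happens for free as currents add on a wire:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.integers(-128, 128, size=(256, 256))   # weights limited to an 8-bit range
x = rng.integers(-128, 128, size=256)          # input activations for one layer

y = W @ x                    # one layer's multiply-accumulates: the expensive part
y = np.maximum(y, 0)         # cheap nonlinearity (ReLU) applied afterwards
```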
I've been wondering about analogue usage of SSDs for a long time. It's an oversimplification, but each cell holds a voltage which can also be interpreted as an analog signal. If we take music as an example, you could basically write the value of one sample point to a cell, storing 16 bits' worth of information in one NAND cell. This of course makes it impossible to compress the music, but it would allow storing music 'losslessly' at the same cell usage as a compressed 256 kb/s file on TLC storage. Of course, NAND reproduction isn't perfect (and as such, the music reproduction wouldn't actually be lossless), but I wonder how close this would come to the compressed digital file. I think this could be potentially useful for offline caches and downloads from Spotify, for example, as the data can be checked and corrected when a high-speed network connection is actually available.
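A rough back-of-the-envelope version of that idea (all the numbers below are invented): write each 16-bit sample as one analog level, read it back with some cell noise, and see how much of the 16 bits actually survives.

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.integers(-32768, 32768, size=10000)      # pretend 16-bit audio samples

v_max = 5.0                                             # made-up cell voltage range
write = (samples + 32768) / 65535 * v_max               # sample -> analog level
read = write + rng.normal(0, 0.005, size=write.shape)   # invented read noise, ~5 mV

recovered = np.clip(np.round(read / v_max * 65535) - 32768, -32768, 32767)
err = recovered - samples
# The RMS error (in 16-bit steps) tells you how many effective bits survive the noise.
print("RMS error in 16-bit steps:", np.sqrt(np.mean(err.astype(float) ** 2)))
```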
I just found out that the SR-71 engines had a hydraulic computer that ran the system. That would be amazing to see. I worked at Caterpillar, and a friend of mine was tasked with converting a scraper transmission module from a hydraulic base to electronic. It was a very old design and all of its engineers had passed on, so they had a team of engineers who had to replicate all the hydraulic functions with an electrical equivalent. It was fascinating to me. One of the functions they had to replicate was going up a steep hill with a full load and being able to shift without rolling backwards: holding the load, sharing the load between two clutches, and increasing one clutch while reducing the other to make it a seamless shift. So I enjoy this topic. Thanks!
The most interesting thing I found in this was when he was saying that in the chip they had to make it alternate between analog and digital signals to maintain coherence. It's interesting because the brain does something similar where it alternates between electric pulses and chemical signals.
The problem is machine learning is still not capable of being used commercially in general environments (e.g. security cameras) because they can't handle unpredictable situations. The brute force method of AI is still the only solution for general environments (e.g. self driving cars)
@@chrisfuller1268 people also suffer from the brute force nature of processing information, aka confirmation bias. Thankfully we can take steps to correct this, but many people lack the tools and mindset to make these self corrections.
@@riskyraccoon Yes, humans are flawed, but we are capable of recognizing objects no matter what else is in our field of view. This is a task machine learning will never be able to solve in 100% of all possible (infinite) environments. Brute force AI requires more development effort, but is capable of also identifying objects in many environments. This is why machine learning is a step backwards in technology and why it should never be used in life critical applications.
@Adam H Amen, I never thought of the beast as an AI! The beast will be cast into the lake of fire so I believe he will be flesh and blood human with a soul, but the 'image of the beast'!
@Adam H yes, that is a very interesting way of looking at it! I think we're a very long way from an AI being able to reason, but we've been using AI to kill people for decades.
Rosenblatt's Perceptron was essentially a one-neuron network, although he could perform logical operations on the binary data inputs before passing results, which gave it more power. Minsky and Papert at MIT were concerned that Rosenblatt was making extravagant claims for his Perceptron and scooping up a lot of the available funding. In their book, "Perceptrons", Minsky & Papert proved that one-neuron networks were limited in the tasks they could perform. You could build networks with multiple Perceptrons, but since Perceptrons had binary outputs, nobody could think of a way to train such networks. That killed funding for neural networks for decades. In the late 1980s, interest was rekindled when John Hopfield, a physicist, came up with a training technique that resembled the cooling of a physical spin-glass system. But the big breakthrough came when the error back-propagation technique was developed by Rumelhart, Hinton & Williams. In this, neurons were modified to have a continuous non-linear function for their outputs instead of a thresholded binary output. Consequently, the outputs of the network were continuous functions of the inputs and weights, and a hill-climbing optimisation process could then be used to adjust the weights and hence minimise network errors. The rest is history.
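A tiny illustration of that last point (single neuron, NumPy, toy data - a sketch, not any historical implementation): with a smooth sigmoid instead of a hard threshold, the output is differentiable in the weights, so you can follow the gradient downhill; back-propagation just chains this same step through multiple layers.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                 # toy 2-feature inputs
y = (X[:, 0] + X[:, 1] > 0).astype(float)     # a linearly separable target

w, b, lr = np.zeros(2), 0.0, 0.5
for _ in range(500):
    z = X @ w + b
    p = 1 / (1 + np.exp(-z))                  # smooth sigmoid instead of a hard threshold
    grad_z = p - y                            # gradient of the cross-entropy loss w.r.t. z
    w -= lr * X.T @ grad_z / len(y)           # nudge weights against the gradient
    b -= lr * grad_z.mean()
```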
@@slatervictoroff3268 I have to disagree. The Perceptron is one neuron (one neuron that receives multiple inputs and puts out one output). This also makes it a one-layer network, I would say.
@@brunsky277 Thinking about it, though, the inputs all had their own weights, and those weights correspond to a neuron. A modern AI model has inputs, and the weights exist on the layers. The Perceptron therefore had 400 inputs, 400 weights, and 1 output signal. That implies to me 400 neurons in a single layer, leading to a single output value.
Using an analog computer to demonstrate differential equations is a perfect teaching tool. I really wish tools like this were used in colleges more often.
They already use crap like this far too often. This isn't something you'd use for a differential equations course. Maybe it would be OK for a circuits course, or even a computer science course focused solely on analog computers. In math, just give us the numbers and the logic... don't waste our time with this stuff.
@@JackFalltrades I'm not calling letting students know of use-cases for things crap. I'm calling taking up class time that is meant for teaching students the logic of how to solve differential equations (because that is the class the original poster said it would be good for) and instead using that class time to teach something that only a tiny fraction of the class would ever use.
@@Elrog3 He never said it should be used in a differential equations course. You just sound like the type of students that go to university and ask which courses they need in order to get a high salary position in industry.
The biggest issue is distortion. Inexact calculations due to imperfect components, degradation of the data when transmitted (wired or wireless), external EM interference, all conspire to make the use of analog a special challenge. Mixing digital and analog to play to the strengths of each along the way intrigues me. I'm old enough to have experienced the full evolution of digital computing. My mindset is therefore quite biased toward it. What you propose would be quite the eye opener for me, if it actually can be made to work as prolifically as current digital technology.
I assume there's a lot to be discovered on the topic of self-correcting algorithms, or even error-correcting analog circuits that compensate partially for the inaccuracies. Like what Hamming codes are for digitally transmitted data.
Distortion, really? I am under the impression that the biggest problem with analog computers is NOISE. You can never get rid of noise in an electrical system. Even if the hardware has no distortion, the inherent thermal noise in the system will cause some small calculation error.
@@StevenSiew2 That's true, but noise is something that AI needs to deal with anyway, because the inputs will always be noisy to begin with. It can actually be useful to _add artificial noise_ while training a digital NN, to avoid overfitting issues. (Stochastic gradient descent can also be seen as a way of making the training "noisy".) As long as the perturbations are small and random, training won't be affected negatively. Distortions, however, are hard to deal with. You may be able to train a model on a particular chip that has such and such distortion; but because the distortion properties don't fluctuate and act as constant-but-unknown biases, the weights will ruthlessly overfit to this particular chip, and then it probably won't work at all on another copy.
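For what it's worth, "adding artificial noise while training" usually looks something like this - a minimal PyTorch-style sketch (model, sizes and noise level all made up for illustration), with the perturbation applied to the inputs during training only:

```python
import torch

def noisy(x, sigma=0.05):
    # Small random perturbations of the inputs during training only;
    # acts as a regularizer, similar in spirit to the analog noise discussed above.
    return x + sigma * torch.randn_like(x)

model = torch.nn.Sequential(torch.nn.Linear(32, 64), torch.nn.ReLU(), torch.nn.Linear(64, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.CrossEntropyLoss()

x = torch.randn(128, 32)                      # stand-in training batch
target = torch.randint(0, 10, (128,))
for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(model(noisy(x)), target)   # noise injected only at train time
    loss.backward()
    opt.step()
```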
As a PhD student in this field, I can answer some of your questions. Yes, we usually talk more about noise than distortion. And thermal noise is not the only issue; there is read and write variability, resistance drift over time, the resistance of interconnections, ... However, it is true that neural networks can sometimes take advantage of noise to avoid overfitting, but only a reasonable amount of noise and only in some cases. Self-correcting algorithms and error correction are options, but it's not that easy; usually this kind of method sacrifices performance or requires more energy (which is the opposite of what we want). About mixing digital and analog: they presented it nicely in the video, but the digital/analog converters require a lot of energy (sometimes more than the vector-matrix multiplication itself), so we don't want to do it too often.
I remember the first time I heard the term "deep networks", it was back in 2009 when I was starting my MSc (using mainly SVMs), a guy in the same institute was finishing his PhD and introduced me to the concept and the struggles (Nehalem 8 cores era)...the leaps in performance made in NN since then thanks to GPGPU are enormous
A couple of missed points: things like Google's Coral are also pushing incredibly high values, and to my knowledge are doing it digitally as an ASIC. Large models are expensive to train - there's no contention here, from Mythic, you, or the wider AI community - but several advancements have been made in the last couple of years that are letting models be compressed and refined to less than 1% of their original size. This makes them incredibly small and efficient, even on traditional CPUs.
I'd love to read about that! I've been dipping my toes in ML algorithms and many of the really interesting networks require an immense amount of memory to function, on the order of tens of gigabytes. I'm curious why those models require such an immense amount of memory, and what can be done to improve that.
@@Zeuskabob1 You don't really need tens of gigabytes to get a good model that can perform well on a task (usually). Most people still use models smaller than 5 GB or so.
Back in the mid-1960s my uncle, Joseph Grandine, designed a combination analog/digital computer that could optimally combine the two modes to solve complex problems in signal processing and data analysis. He called his computer the Ambilog 200. At that time, digital computing won the day. Now it sounds like he was a few generations ahead of his time.
This makes me want Derek to talk about neural networks and AI-related topics a lot more. It's not just extremely interesting but also constantly developing.
I've always mused about this to myself; I always thought, 'why not use analogue to calculate certain things?' There's lots of stuff in physics that's extremely hard to calculate but just 'happens' in the real world in an efficient way. The surface of a bubble, for instance, minimises surface area very rapidly in a way that takes no effort on the bubble's part, but is incredibly hard for a digital PC to calculate. The tricky part (and the reason people doing this are smart scientists/engineers and I'm not) is figuring out how to wrangle "the bubble" into a portable and responsive piece of hardware, and it's super cool to see efforts made in this direction are having success.
Same thought. I used this technique to figure out a way to solve mazes super efficiently with flowing water. I think that’s what’s happening with quantum computers also.
It always comes back to the drawbacks Derek mentions at the beginning of the video: analog processing is single-purpose, error-prone, and hard to repeat. As such, for your physics example, it would invalidate A LOT of the data you get back, since you cannot guarantee a certain level of falsifiability, auditability, and error margins. You CAN get there, but you start requiring A LOT of boilerplate circuitry around the actual solution-solving hardware. As a silly and basic example that is almost trivial nowadays, but still there, think of the need to add a lot of surge protection and current stabilization to a circuit to ensure that the natural unsteadiness of the power grid won't skew your results. And that's just taking into account discrete and "simple" issues. Imagine processing data for some chaos-related physics theory and basically getting pure rubbish at the end, because even the slightest microvolt-level disturbance automatically distorts everything. How about external interference? Or electromagnetic interference between the actual wires in the circuit? As I said, not impossible to tackle, but you suddenly have an overhead of 90% boilerplate just to make the results useful for anything practical. I can't even imagine all the engineering that must have gone into those Mythic semi-analog chips for AI, just to keep everything tidy. The fact that a Realtek-sized chip can give you one third the performance of some Nvidia Quadro (or similar) card, for a fourth of the power consumption of a cheap entry-level mobile Core i3, is just astounding!
To be clear, these Mythic chips point towards a future resurgence of analog processors, not dissimilar to what digital ones brought in with their unparalleled versatility. Outside of very bespoke chips for very high amounts of money, probably in the realms of high-end research and science, I can see a general idea of modularity at a functional level. Say you manufacture analog chips that can do some very important but expensive math that is common for most science in some specific branch (say, a lot of transformations, or integration, maybe some Lorentzians, and so on). Then, at research groups, institutes, and universities, they do the same thing electronic engineers do with good old breadboards: DIY some complex formulae on the fly, test their hypotheses, and iterate over the formulae as needed. Imagine those astrophysicists doing 2k-term polynomials being able to duct-tape a dozen chips together, the same way electronic engineers use logic gates as basic digital units, and getting the results out in a couple of hours, instead of having to write a piece of software that will take a couple of days to run and a week to write, where any mistake or failed result requires another week of debugging just to make sure it failed because you were wrong, and not because you typed a 5 where a 6 should have gone when writing all 1,500 terms of one of the formulae.
Yes I've always thought this too. Aside from the negatives mentioned in the vid and comment above: "if you can't calculate it, let nature do it" I've used the term 'calculate' here, but I think it applies in a broader sense. If something is too hard to manufacture / produce precisely maybe nature can do it better.
One of my first "computer" classes in engineering school was learning to wire up an analog computer and solve differential equations. Because I had to "assemble" the hardware for the process, it felt much more hands-on than when I took a punch deck to the little window, and waited for up to 20 minutes for the compiler to tell me I had no idea how Fortran worked. At the time, I really appreciated that parameters on the analog could be changed quickly in order to see how different currents, voltages, resistance, etc. affected the outcome. Of course, now with the speed of digital processors, the efficiency of Python libraries, and the Interwebs, I have largely gotten to appreciate the digital world. Now, Derek has got me jazzed to buy a portable analog. $200 on Ebay?
Yeah, my computer classes in engineering school had a similar thing, though with us it was opamps. It was not a full class, but we did it around the same time as learning FPGAs and having to implement complex programmable digital logic, so it was a good reminder of 'digital logic with an ADC/DAC pair is not always the best or simplest solution'
While it's absolutely not the same thing, I encourage newish programmers to write a 6502 emulator. It's about as close as one can realistically get to building your own CPU hands on, which IMHO gives a worthwhile different perspective to the field than the now common approach to never leave the comfort of interpreters and virtual machines.
There's an entire field of research called neuromorphic computing/engineering looking into this very problem. It was pioneered by Carver Mead in the 90s and has seen a lot of interest lately.
Great video. Never knew this was a thing. Very useful. Might one day just be an extra part on the motherboard designed for fast approximation calculations
I'm a little disappointed by the title but impressed by the content. It's less "we're building computers wrong" and more "old method is relevant in a niche application". There's also the eventual plans for fully commercial quantum supercomputing clusters and ever faster internet connections which might further limit the applicability of these chips going forward. However, building processing-specialized chips instead of relying on graphics cards seems really promising in the short term so long as the market stabilizes.
Yeah, the old Navy fire control systems - along with directional aspects of sonar/radar - were analog from beginning to end, and the math required to come up with a fire-control solution that was stabilized in 3d on a moving ship, was intrinsic. It didn't compute as we think of it today - the problem and the solution were just a single feedback loop. I remember early in my training when I grasped this, it seemed like magic. Completely steeped in digital computation in my current work, it still seems more magical.
Fascinating! I was one of the first engineers to 'train' a computer to recognize handwritten numbers. It was used for reading zip codes for post office sorting. It worked quite well, and the method I dreamed up is what is used today. Namely, getting many samples (I sent pages around the office asking people to fill in boxes with the numbers zero to nine - the variability in human handwriting was amazing). Then I separated each box into nine areas, and a program determined whether an area had a mark or not. By playing with the various combinations, and tweaking it for often-confused numbers like 5 and 6, we got a very low error rate. I'm happy to see I was on the right track sixty years ago.
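For anyone curious, that zoning scheme is easy to reproduce today; here is a rough sketch (array sizes, threshold and the example digit all invented) of splitting a digit image into nine areas and marking which ones contain ink:

```python
import numpy as np

def zone_features(img, threshold=0.5):
    """Split a digit image into a 3x3 grid and mark which zones contain ink,
    roughly the hand-built feature scheme described above (threshold invented)."""
    h, w = img.shape
    features = []
    for i in range(3):
        for j in range(3):
            zone = img[i*h//3:(i+1)*h//3, j*w//3:(j+1)*w//3]
            features.append(int(zone.max() > threshold))
    return features

digit = np.zeros((30, 30))
digit[5:25, 14:16] = 1.0           # a crude "1": a vertical stroke in the middle column
print(zone_features(digit))        # -> [0, 1, 0, 0, 1, 0, 0, 1, 0]
```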
I agree. One point I was going to mention regarding analog computers is that they are susceptible to voltage fluctuations and environmental noise, and the accuracy of your results is directly dependent on the accuracy of the equipment reading the output voltages. There is that, but this makes sense when talking about specific applications like this one 👌
My dad's "Back when I was your age" stories on computing were about how he had to learn on an analog computer, which, according to him, you "had to get up and running, whirring at just the right sound--you had to listen for it--before it would give you a correct calculation. Otherwise, you'd input 100+100 and get, say, 202 for an answer." he hasn't been able to remember what make/model that computer was, but i'm curious. any old-school computer geeks out there know what he may have been talking about? Era would have been late 60s or early 70s.
It sounds like your dad's computer was before the invention of the transistor. There was an analog computer at the electronics lab at the university of Hull, UK (when I was a student there in the 80s) that had moving parts. I remember when it became unstable and the professor sprinted across the lab to shut it down before it self-destructed. Something spinning suggests a sine wave generator for example.
Yeah, just like analog synthesizers. You have to let them warm up to a stable temperature first or they would constantly drift out of tune while playing. This was later solved with digital controllers.
I can't address your question directly, but in the latter half of the 1960s I worked on a helicopter simulator, used to train military pilots, in which all computations simulating flight were performed by analog circuits made up of transistorized (no ICs) operational amplifiers and servo motors with feedback. This whole machine was housed in a 40-foot-long semi trailer. In the rear of the trailer was a cockpit from a CH-46 helicopter including all the controls and instruments, but the windows were frosted over so you were always flying IFR in a fog, i.e. no visuals. Next as you moved forward was an operator's station where you could control parameters such as air pressure and temperature, activate failures such as an engine fire or hydraulic failure, and such. The remainder of the trailer contained a row of electronics racks on each side housing the amplifiers, servos and other circuits that performed all the calculations. We can look at main rotor speed as an example of how it worked. Rotor speed was represented by the position of a servo motor from 0 to 120 degrees. The position of the motor was determined by the output of an amplifier whose inputs were derived from many variables such as engine power (there were two engines), collective control position and altitude. Attached to the servo motor was a potentiometer whose output drove a cockpit instrument but was also fed back to the amplifiers/servos used to calculate engine power and such. There were many such subsystems with feedback loops interconnecting them, so failures were very difficult to diagnose. Often the only way to resolve a problem was to take a guess at which part might have failed and replace it. Also, routine maintenance was very labor intensive, as the many potentiometers would wear and need to be cleaned and then realigned, which might take an hour for each one. As a young man I was totally amazed and fascinated by this technology. As an old man I can't believe that it really worked at all. But it did, at least some of the time.
Back in the day, circa 1957, I was an Electrical Engineering student at the City College of New York. In one of the labs we constructed an analog computer using physical components like motors, gears, etc. There was absolutely nothing binary/digital involved, except whether you passed or failed the course. A couple of years later I worked with a Bendix G15 computer with an optional DDA (Digital Differential Analyzer). The DDA was an analog computer; input and output were analog. You can look it up on Google; search for "Bendix G15 computer with DDA".
I'd be interested to see a digital computer adopt an analogue component, possibly to be utilized in situations of physics simulation, much like how a GPU is utilized to independently create graphics from the CPU.
So, this was Harold Finch's solution in Person of Interest. His ability to create an autonomous, observant AI to identify dangerous behavior was the result of Rosenblatt's work, and he did it 15 or 20 years before anyone else would even attempt it. They missed an opportunity to mention him in PoI. Von Neumann was the mathematics, Turing is the father of modern computing, but Rosenblatt was a maverick on the nature of neural networks.
Oh my god, thank you for bringing this up. Ever since I watched that show I have been set on learning everything and anything about AI. It inspired me and set me on my current course in Comp Science.
Fantastic video, and I learnt a lot, being a biologist. Small correction: neurons (the real ones) are indeed analog in the sense that they can tweak their output and fire, fire more, or fire less, just like an analog computer. This happens through a combination of changes in neurotransmitters, their release at the synapses, and neuropeptides that can change the 'gain' of the neural networks.
Exactly! The analog behavior of neurons is closely modeled by the analog current/voltage exhibited by the tweaked transistor cells, as so well demonstrated and visualized in the video.
Neurons have a lot going on inside, and things are happening outside, that affect what they do and when they do it. It amazes me that computer neural networks work at all, let alone as well as they do.
@@vyor8837 Not a load of rubbish, just amazingly primitive compared to what it's trying to mimic. And also use specific. And not really capable of learning new kinds of tasks. ;-)
Interesting video, but it felt a little too hyped up for me ^^ The discussed challenge is a highly specific application: matrix multiplication. The solution shown here was an analog ASIC (application-specific integrated circuit), which is a type of chip we've been making for over half a century. Once a task becomes both computationally expensive and very specific, the fastest method has always been to make a specific chip for it. Nor is analog multiplication anything new; I remember being taught the little analog multiplier circuit with the Gilbert cell over a decade ago.
I believe Derek found a little niche to focus on since he did the video on the ancient Greek analogue computer, which had an almost identical conclusion
I feel he is on to something here ... maybe the real benefit is that you don't have to make all these specific chips, because in principle one fairly big analog one could do everything you threw at it. But it feels a bit scary to me too, because you are getting closer to biological systems.
Yeah, the reduction in power consumption I'd imagine is mostly due to it being an ASIC, not being analog. There are quite a few digital AI inference ASICs coming onto the market as well - I'm curious to see which ones will reign supreme
This is missing any mention of the other big alternative: photonics. Startups like Lightmatter have shown that this is another very potent approach. Its benefits are astonishing: it isn't limited by electronic bandwidth and losses, and a single circuit can run the same calculation several times at once by using multiple colors/wavelengths. It was also left out that a big problem with these systems is the bottleneck in converting between general-purpose digital compute and these analog domains.
Probably because it is also an emerging technology, but also because photonics just swaps in photons for electrons as the carrier, a new actor, while the video is contrasting two fundamentally different ways of computing.
I was waiting for him to get to photonics, too. It's a HUGE opportunity for crazy amounts of parallel processing. And then there's the white whale of quantum computing, too...
Some years ago, as I was finishing my computer science education, I had to do a project for my finals (this was not university, btw, just a secondary kind of route available in my country), and I had always been fascinated by artificial intelligence and neural networks, so I picked something in that realm. I had never had anything to do with AI prior to that, so my knowledge was quite shallow when it came to how artificial neural networks actually work. I had been programming for quite some time at that point, so I had my basic tools in place, so to say, but I really didn't know where to start when it came to the "task" I wanted to solve by this method. The task I picked was driving a little simulated 2D car along a randomly generated road without any human input, and unfortunately the language I was most proficient in at the time was Java, so I tried to implement it in Java. When I first started the project, I read a lot about neural networks and even included some scientific papers in my reading, but not coming from any kind of scientific background, I really struggled to understand the deeper concepts presented in those, which eventually led me to abandon the reading and just approach it with the general question of "How could/would it work?". Looking back, I think I have never in my life been more challenged by a task I set out to complete, and never before had I thought that much about how our brains actually process information, as that is the key idea behind it. Yet, slowly but surely, I ended up with a concept pretty similar to what most neural networks do, even though probably far less efficient than using something that was already available like TensorFlow. Where am I going with this story? Well, after a while it actually worked: the little network I created "learned" how to drive the car along the road after countless iterations of doing it wrong, and to this day it just absolutely blows my mind on so many levels that this was possible. Even more so considering that all of these complex processes take place in our brains every microsecond we exist, with much more proficiency. I absolutely loved that project from beginning to end, and more than anything else in that educational path or my years of aimlessly programming for fun, it made me realize that I truly love the entire concept of using programming to recreate something in nature. Kind of a pointless anecdote, sorry.
Not pointless at all! Many of us are developers, and that means learning through pet projects. From my point of view, the way to learn is to develop things from scratch while, in parallel, learning the established libraries or dedicated software (I'm playing with a bunch of neurons written from scratch, plus OpenCV). Cheers!
That analog neural network was really interesting. But to me it's still essentially digital, i.e. discrete. In a normal digital solution you might have 16 possible values for the weights which would be encoded as 4 bits and would need to undergo addition/multiplication. But in the "analog" solution you encode the weights by setting one of 16 distinct voltage levels. The available voltage levels are quantised, not continuous so it's still a discrete system. It's great that you can do addition by just summing currents and multiplication by changing resistance. But you can even do this with binary: AND gates are multipliers and OR gates are adders if you only have 1 bit of data (1 OR 1 gives an overflow condition but the "analog" design will need enough voltage levels to avoid overflow also e.g. 7 + 13 would give the answer of 16 if this was the highest voltage level). I'd say it's still digital but it's not binary. It's multi-level logic.
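A minimal Python sketch of the point above, assuming an illustrative 1 V full-scale cell and 16 levels: storing a weight as one of 16 discrete voltage levels carries exactly the same information as a 4-bit digital weight.

```python
# Hypothetical sketch: a weight stored as one of 16 voltage levels is
# equivalent to 4-bit quantization of that weight (multi-level logic).
V_MAX = 1.0          # assumed full-scale cell voltage (illustrative)
LEVELS = 16          # 16 distinct levels ~ 4 bits of information

def weight_to_level(w):
    """Map a weight in [0, 1] to the nearest of 16 discrete levels."""
    return int(round(w * (LEVELS - 1)))

def level_to_voltage(level):
    """Map a discrete level back to the voltage actually stored in the cell."""
    return V_MAX * level / (LEVELS - 1)

w = 0.37
lvl = weight_to_level(w)    # 6 -> same information as the 4-bit value 0b0110
v = level_to_voltage(lvl)   # 0.4 V; the quantization error is |0.4 - 0.37|
print(lvl, v)
```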
Quantum computers (ones that use physical qubits) are only hypothetical, but people talk as if they already exist. They don't; there is not a single, fully functional quantum computer on the planet, and there might never be.
@@BreaksFast They can and do exist, albeit with very limited qubit counts. The first experimental demonstration of one was in 1998. D-Wave Systems are selling computers with 2048+ qubits right now.
Analog computers are actually a hardwired set of circuits “programmed” for a particular task. They excel at massive parallelism and true real-time performance. In addition to analog circuits built with transistors or tubes, optical devices such as prisms (or rainbows) do real-time spectrum analysis at light frequencies, and have real-time color displays. To duplicate the performance of an optical prism at those frequencies using digital circuitry would require massive arrays of digital hardware multiplier/accumulators. I did the calculation once in the mid 1990s, and at that time it would have required about 600 MW of power. Early spectrum analysers developed for military applications took audio or radio waves, upconverted them to light, and used a prism to do the spectrum analysis.
If need be, could analog computers be made to go digital temporarily? If so, it would mean that they can perform accurately for a time and then go back to analog for complexity.
@@LeTtRrZ There are tons of studies on reconfigurable architectures, and the theory of how good a digital computer can be at representing analog behaviors (so, I reckon, the opposite of what you were conjecturing) is well known, as are the implied constraints, but the matter remains about what bricks set should be part of the reconfigurable architectural fraction, and in which abundance each. As soon as you decide the number of components available, and the maximum degree of connection reconfigurability, you define a limit on what you can represent in a given amount of time. As a passage in the video suggests, there are studies about the utilization of architectures with digital boundaries between analog slices, but error correction possibility is very often, I'd dare to say always, a consequence of what you know you want to represent, simply because if you don't know what you're representing, you cannot tell whether you're doing right or wrong... At best you may exploit some underneath characteristics of the representational space: e.g.: if you know that your values should fall on one element of a grid you may correct an analog result choosing the nearest grid element, which means, somehow, re-digitalizing the result... But knowing what you're representing poses a constraint on the freedom of the intermediate representations...
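A small Python sketch of the "snap to the nearest grid element" correction described in the reply above; the grid of legal values is an illustrative assumption, not taken from any particular hardware.

```python
# If results are known to lie on a fixed grid, a noisy analog value can be
# "re-digitized" by rounding to the nearest legal grid element.
import numpy as np

grid = np.linspace(0.0, 1.0, 17)   # assumed set of legal output values (step 1/16)

def redigitize(analog_value):
    """Correct an analog result by choosing the nearest legal grid element."""
    idx = np.argmin(np.abs(grid - analog_value))
    return grid[idx]

noisy = 0.3139                      # e.g. true value 0.3125 plus analog noise
print(redigitize(noisy))            # -> 0.3125
```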
Funny, I always thought of Asimov's positronic robots as analog computing, but as a programmer it was difficult to understand how to work with analog instead of binary. This video makes a lot of sense: I can see how the combination of voltage and frequency can influence the result, and how multiple inputs, each weighted differently, can combine to determine the final output. I only grasp a glimpse, I know, but my imagination and this video let me see how it can relate to the neural network in our brains. Amazing!!
@@Mashimara I am not 100% sure, but quantum computing seems a bit like an analogue of the analog computer, with similar principles but at a much smaller scale: an analog computer uses thousands to millions of atoms to represent a value, while a quantum one operates on single atoms, electrons, photons and other particles. Quantum computers also use entanglement, which I think no one quite understands how it works, so people explain it with nonsense ideas like faster-than-light communication. They invoke paradoxes to explain that possibility, but it would still break a basic law of physics: nothing is ever faster than light. But I am not sure 😃
@@aoeu256 No, analog doesn't. He was describing our implementations of analog. There is no inherent theoretical error rate in analog systems; they are in fact theoretically perfectly accurate, while digital can't be. An analog circuit could in principle carry the precise value of pi; a digital one can't.
I used this content and visuals for my Electronics Engineering final year technical seminar. I loved the content, and the way it's put together. Thanks for choosing the most interesting stuff to put in my feed.
When I was an engineering student in the 1970s, courses on analog computers were required after a first course in linear differential equations (using slide rules was allowed in class, but not electronic calculators). Regarding your description of analog computers compared to digital computers: we used operational amplifiers to build integrators and differentiators, so analog computers contain transistors (tubes, before bipolar transistors), not just resistors and capacitors.
@africancheez Engineering is a great profession. Along with analog computers we had to take classes on digital computers. As a student, I used the HP1100 (CPU, paper tape reader, 1MB hard disk), IBM 370 (teletype and punch cards), and PDP-11 (punch cards) to learn Fortran IV, Pascal, and BASIC. My professors were experimenting with the TRS-80, Timex, and Apple 1 personal computers (most arrived as kits) as a hobby (top speed was 1 MHz... 1977 timeframe). In the computer lab there was a desktop PC made by Commodore (a Commodore PET with a built-in cassette tape for mass data storage), but only grad students were allowed to use it. It was good to know the difference between solving certain problems with a program on a digital computer versus solving the same problem with a far simpler analog computer. Good luck to ya.
I can only speak for my associates degree program but we still do that. This past semester we used IC opamps to make integrator and differentiator circuits. They were some fun labs. I love seeing some of the signal art people make on oscopes.
@africancheez "we have had the technology for a long time, but the world just wasn't ready for it yet." It was never just a technology problem. Do you want your data in someone else's warehouse? Very often, no. Actually I believe it is never optimum since you lose some 4th amendment rights regarding search and seizure. On the other hand, a small company that cannot justify the expensive and reliable talent needed to guard company secrets has no good choices and might as well go "in the cloud" but even there dividing up the cloud (some to Amazon, some to Microsoft) would prevent or reduce complete and total disaster.
When I was in the Navy I worked on the Fresnel Lens Optical Landing System. There was no 1% error. It was .005 VDC tolerance over a minimum 5 VDC base. The computers had a lot of math to solve to target the hook touch down point for each aircraft. It was completely analog and op amp driven and has been around for over half a century. I've witnessed many many old analog machines in manufacturing since then. Analog technology isn't new technology, it's forgotten technology pushed aside by the digital technologies. I'm happy to see it hasn't completely died.
When I was at my institute back in 2016, as an undergraduate I was thinking about these specific “gates” as well. I knew someone was already implementing the idea, but I still regret missing the chance to be part of those innovations. What a genius way of reimagining circuits for neural networks. Maybe that's what the future of FPGAs is: neural networks.
I was a finalist in the state science fair competition back in the 4th grade. So around 1977. My project was a board about 2 foot by three foot, full of 2 and three position switches and colored lights. It was a logic board that could solve various types of equations. Pretty cool in a time when almost no one had ever touched an actual computer. In the end I learned absolutely nothing about computers. But I learned to solder really, really well from it. Moral of the story, if you can't learn to code, at least learn to solder. :)
Never too late to learn :) Start with arduinos, they're amazing little chips that can do wonders. Even something simple like a temperature sensor might be fun.
It sounds like it could make GPUs more power efficient. GPUs are starting to use AI to make certain computations more accurate, so maybe an analog chip on our GPUs could handle that instead.
@@BlueDrew10 I agree, but the first major bottleneck is, like he said in the video, the massive power required to train each AI (each needing the combined annual energy usage of three households), so mass production seems inefficient.
This just goes to show that no knowledge is useless. When I was in my final year of my undergraduate degree ( Electrical Engineering) I took a course on analog computers and the general consensus was that this field was obsolete. That year was the last year that the course was taught as it was phased out in the new curriculum.
I remember my friend once made a completely analog line-follower robot. He implemented a PID controller using op-amps, trimming the parameters with 3 variable resistors. It actually worked quite well!
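For readers who haven't met PID control: a rough digital equivalent of the analog op-amp loop described above, in Python. The gains, timestep, and the idea of using the line-sensor offset as the error are illustrative assumptions, not details from the original robot.

```python
# Digital sketch of a PID loop: the three trimmer pots become kp, ki, kd,
# and the summing/integrating/differentiating op-amp stages become arithmetic.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        # proportional + integral + derivative terms
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# e.g. error = line-sensor offset, output = steering correction (assumed units)
pid = PID(kp=1.2, ki=0.1, kd=0.05, dt=0.01)
print(pid.update(0.3))
```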
That sounds like the micro-mouse project. In the UK in the late 90s it was a challenge for electronics students at schools (electives, years 10-12). Really fun finding solutions to the problem of navigating a maze, whether a line on paper, a 3D maze, or both.
Brilliant to use flash memory technology for analog computing! When I think of analog computing I think of using operational amplifiers, resistors and capacitors to do the calculations. These are very limited in bandwidth, so I thought optical processing would take off. Lightmatter has produced a light processor that can do large matrix multiplications and additions. I didn't expect flash memory technology to be used. Very creative. I think we are going to see a mix of different technologies in the future, perhaps optical processors for training of NNs, analog processors for execution of NNs, and digital for the rest. I'm sure digital will never be replaced, since there are many problems where we need an exact solution and where a program needs to run for a long time without errors. Electronics will also never disappear, since electricity is so fundamental: energy production and storage are electrical, sensors and actuators are often electrical, and we have powerful electrical processors, both digital and analog.
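A numerical sketch of the current-summing idea behind this kind of analog matrix multiply, with made-up numbers; it is an idealized model of Ohm's law plus Kirchhoff's current law, not any vendor's actual circuit.

```python
# Conductances play the role of weights (Ohm's law: I = G * V) and
# Kirchhoff's current law sums the products on each output wire.
import numpy as np

G = np.array([[1.0, 0.5, 0.2],     # conductances (weights), arbitrary units
              [0.3, 0.8, 0.1]])
V = np.array([0.2, 0.4, 0.6])      # input voltages (activations)

I_out = G @ V                       # per-row current = sum of G_ij * V_j
print(I_out)                        # the matrix-vector product, "computed" by physics
```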
As I sit here working on my NLP research, I happened to get this video suggested to me. Thank you for covering this field, and doing such a great job explaining the history of neural networks. I love your videos, and thank you for the breadth of knowledge and experience you impart on your viewers. Dr. Muller, you are an amazing champion and advocate for the STEM field, and I think what you do with your channel is exemplary. Thank you!
What a great video. This is the first time I clearly understand how we got to deep neural network algorithms, because most of the time we learn what they are and how they work, but here I see the fundamental ideas that led to them. Besides, going back to analog but with newer technologies is fascinating, as if we were discovering our world again. Digital computing was a revolution, and there is a new revolution happening here, certainly using both principles (with AD or DA converters in between). Even if a lot of research goes into the quantum domain, which is still immature, analog computation already allows a lot. Also, as an engineer in the space domain, I see here an extension of what we're doing all the time: acquisition values are often provided in the form of a voltage with a calibration curve allowing the interpretation. So we commonly use analog signals and computations. We even use matrix-style analog computations, but most of the time for a 2x2 matrix (example: you have 2 switches reporting their positions through resistance values; if the two switches use different scales of resistance, then a single signal holds the information about both positions). So we do use analog signals, able to carry more complex information than digital ones do. But here it goes a great step further because it applies to matrix computing. This application is going to revolutionize computer science, and that may happen quite soon because all the fundamental science and technology is already known.
FPGAs (Field Programmable Gate Arrays) are also becoming bigger and more powerful... ASICs are great, but being able to reprogram the analog computer combines advantages of analog and digital. It's nice to be able to update machine learning models without requiring new hardware for every iteration.
1) Small FPGAs may be affordable, but good luck doing AI on those. Bigger FPGAs (e.g. the one our company had) cost well over 10k USD. In other words, "bigger and more powerful" is not the most important issue. 2) How many people know how to program an FPGA? C++/Python programmers are a dime a dozen, but how many know Verilog? Not saying they're useless - on the contrary, doing AI on FPGAs is an active field of research.
@@pierrecurie it might not be feasible for all circumstances yet, but there have been improvements in size and also cost in recent years. Also, there exist machine learning applications that do not need thousands of neurons (as the Imagenet stuff described in the video does). For my bachelor thesis, I did some research on anomaly detection using auto encoders and lower dimensional sensor data, those models could fit on affordable FPGAs. Sure, a microcontroller can handle that too, but lower energy needs and less heat output are still advantages in some embedded use cases...
When I was first reading about neural nets when they hit the main stage, I had just completed several analogue electronics courses. It struck me as a natural marriage of the two realms. Analogue processing with digital memory would be ideal for human centric applications.
You are quite right to point out that digital memory would be a necessity, as encoding information in an analogue way is totally inefficient. The problem, though, is it's not like memory is just a thing that stores information when it's not in use. Memory is also a key component of any active process. So to suggest that you do analogue processing with digital memory, would essentially mean you'd need to be spending a huge amount of effort converting between the two during even a single process routine. The other solution, to keep active memory as only analogue, isn't much better, because then your digital memory is essentially bottlenecked by your active analogue memory.
@@MassDefibrillator if I understood correctly part of why analog is better for processing is that they don't have to deal with accessing the stored data multiple times just to compute something. Instead it just computes it and spit out the result. Using digital memory would thus be the equivalent of having a piece of paper hold the value (digital memory) while we (analog processor) do the actual work. What speed you lose converting to and from digital as needed is more than made up by how fast analog can process data.
@@Temperans The more I think about it, the more I think it's not really possible. Analogue accesses the data just as much. The only difference is that the data is stored in analogue during the process, so you are limited by analogue's terrible inefficiency in information representation. So, in analogue, the number 20 is stored as a voltage during processing. Let's assume you need to represent the numbers between 0 and 256, the 8-bit number system. In order to do this, your voltage meter would need to distinguish at the millivolt level; you can't do it at the volt level, because then you'd need to use between 0 and 256 volts. So let's assume your voltage reader can distinguish at the millivolt level. It needs to be able to do that accurately; it can't confuse 3 millivolts with 4, otherwise it's useless. To get a voltage reader capable of doing that, you need a much more expensive and complex bit of tech than the equivalent digital processor would need. I noticed this in the video: when he talked about how simple analogue maths was, he was talking as if humans could read voltage levels themselves. He neglected to mention that you would need a much more sophisticated voltage reader in an analogue system than you need in a digital one. So in reality, analogue is just offloading the processing complexity onto the voltage readers. As far as I can tell, it's basically all nonsense hype. For these same reasons, it's very likely that the brain is a digital system, not an analogue one. We know for a fact that DNA uses digital encoding and processing (or more specifically, base-4 encoding), so it's not such a stretch to believe the brain does too.
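A worked version of the resolution argument above, assuming a 1 V analog swing (the swing is an illustrative choice; a larger swing relaxes the requirement proportionally).

```python
# To represent 256 distinct values within a 1 V swing, adjacent levels are
# under 4 mV apart, so the read-out circuitry must resolve (and never
# confuse) millivolt-scale steps.
full_scale = 1.0                     # assumed 1 V analog swing
levels = 256                         # 8 bits' worth of distinct values
step = full_scale / (levels - 1)
print(f"step between adjacent levels: {step * 1000:.2f} mV")   # ~3.92 mV
```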
Veritasium should honestly get a Nobel Prize in education or something! These videos are incredibly insightful and inspiring. I am a software engineer and this has motivated me to try out some machine learning out of excitement and curiosity! Thanks, Derek
Going between digital and analog in the case of Mythic actually makes it more like neurons in a way, since they operate with both chemical and electrical parts.
For performing heavy mathematical operations on weak microprocessors analog computation comes in very handy as well! A great trick to do integration (and differentiation) on an Arduino is to use an opamp with a capacitor and a couple resistors to build an integrating (or differentiating) amplifier. With the ADCs and DACs readily available in that chip (same for PIC or other low clockrate UCs) it takes very little resources to get it going :-)
When I started thinking about artificial neural nets, I just assumed they would really only happen on specialized analog computers in the future. Then google and others along with more powerful digital computers made it work pretty darn great. I love being in this time of history, watching so much science fiction slowly become reality.
I went through engineering school in the 1970s, and they still taught us to program analog computers. It was a great way to learn the ins and outs of classical control theory. It was a holdover from the 1950s art of aircraft control system design, which NASA had perfected, then replaced with digital control system design. But during the transition from all analog to pure digital, NASA (specifically the Dryden [now Armstrong] Flight Research Center) made use of hybrid computers: combining the features of analog and digital computers. I've often wondered about whether this pairing would in some way allow us to build computers capable of solving the Navier-Stokes equations at fine enough mesh sizes to really tackle the problem of turbulence. I was blown away by the computation rate of that single, 3 watt chip. One thing analog computers don't do is differentiation - solving "differential equations" on an analog computer actually consists of casting the equations in integral form, then solving them. Trying to take a derivative on an analog computer, while possible from an electrical schematic standpoint, winds up amplifying noise so rapidly that the solution blows up. If digital circuits did the differentiating, that problem might be eliminated (especially with the breathtaking performance of digital filters in suppressing circuit noise). The chip design for something like that would be amazing!
Michael wrote, _"One thing analog computers don't do is differentiation..."_ Actually, it is very easy to do differentiation and integration with analog circuitry, using op-amps. If you make a current-follower op-amp circuit, and you put a capacitor instead of a resistor in the input path, you'll get a derivative out. If, instead, you put a capacitor in the feedback path, you'll get an integrated output. Less obviously, you can also do multiplication and division "the slide rule way" -- i.e., by adding or subtracting logarithms, and then using exponentiation (antilog) on the sum or difference. Current-follower op-amp circuits make it simple to add currents: just have several input paths, all connected (through resistors) to the op-amp's negative input: the output voltage is proportional to the negative sum of the input currents. To subtract a current, you simply insert an extra stage (an inverter stage) on one of the inputs. To multiply or divide with analog op-amp circuits, you must first take the logarithm. That is actually simple, because the current/voltage response curve of a forward-biased PN junction is an almost perfect exponential curve. So to take a logarithm with an analog op-amp circuit, you simply use a diode or transistor instead of a resistor in the feedback path. Or, for an exponential (antilog) function, you use a diode or transistor (instead of a resistor) in the input path. So to multiply and divide voltages, you first take the logarithms, then you sum them (or invert and sum, for division), then exponentiate. A good source for information about these circuits is the Malmstadt - Enke - Crouch "instrumentation for scientists" series of books.
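A tiny Python sketch of the "slide rule" multiplication described in the reply above: take logs (the diode stage), add them (the summing stage), then exponentiate (the antilog stage). Here the analog stages are just math functions, so this only illustrates the signal flow, not real circuit behaviour.

```python
# Multiply two positive "voltages" by adding their logarithms, the way
# log/antilog op-amp stages do it.
import math

def analog_multiply(a, b):
    """Multiply two positive values via log -> sum -> antilog."""
    log_sum = math.log(a) + math.log(b)   # log stages + summing stage
    return math.exp(log_sum)              # antilog (exponentiator) stage

print(analog_multiply(3.0, 4.0))           # ~12.0
```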
I went to school for ME, which meant taking the intro to EE course. The instructor in this course actually gave us some final exam problems that were exactly as you described - write out the circuit which models this differential equation (we used op amps like David Burton mentioned). Most of it went RIGHT over my head being an ME student and having not even taken differential equations at that point, but this video was still very interesting to start to connect the applications of such a thing. This way of thinking isn't dead in universities yet it seems.
> combining the features of analog and digital computers. I've often wondered about whether this pairing would in some way allow us to build computers capable of solving the Navier-Stokes equations at fine enough mesh sizes to really tackle the problem of turbulence Like another commenter wrote, it is possible to take derivatives and integrals on analog computers. I don't think that it's necessary, however, since oftentimes with finite differencing methods or finite element methods (some common numerical techniques to solve the Navier Stokes Eqns), we find the analytical forms of derivatives before computing anything, or we avoid calculating derivatives altogether with fine mesh sizes and timesteps. Afterwards it's a matter of matrix arithmetic to arrive at the numerical solution to the problem. Certainly analog computers may do these operations much more quickly than digital computers, but I think the main challenge with solving PDEs over some domain is that we need to somehow store the solution data across the entire mesh, across all timesteps. It's easier to do this all digitally since it's easier to interface with the RAM and the storage on your computer than it is to devise a scheme for a specialized piece of hardware to do so. Only when the computations are truly, truly expensive, as in the case of machine learning/neural nets, do the benefits outweigh the costs. But, who knows? Maybe someone clever enough will figure out a way!
Differentiation also does not work well digitally: your source data come from an ADC, with noise. In the simplest second-order case the derivative is estimated from a three-tap stencil over [x(t-1), x(t), x(t+1)], i.e. d x(t) ≈ (x(t+1) − x(t−1)) / 2. In reality you have to apply steep high-pass filters together with the differentiator, which makes the differentiator non-localized at the point t and dependent on the adjacent taps.
@@mikets42, to deal with noise, you do some sort of curve-fitting, e.g., regression analysis (or local regression analysis, LOESS). If you want the 1st derivative you do linear regression; for the 2nd derivative you do quadratic regression, etc. _Caveat:_ I'm not expert on this. I've written regression analysis code, but not LOESS code.
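A short Python sketch of the regression idea above, on synthetic data: estimate the derivative of a noisy signal by fitting a local straight line (its slope is the derivative) instead of taking a raw finite difference, which amplifies the noise. Window size and noise level are arbitrary choices for illustration.

```python
# Compare a naive finite difference with a local linear-regression slope
# on noisy samples of sin(t); the true derivative is cos(t).
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 500)
x = np.sin(t) + 0.01 * rng.standard_normal(t.size)   # noisy samples

def local_slope(t, x, i, half_window=10):
    """First derivative at index i from a linear fit over a small window."""
    lo, hi = max(0, i - half_window), min(len(t), i + half_window + 1)
    slope, _ = np.polyfit(t[lo:hi], x[lo:hi], 1)
    return slope

i = 250
naive = (x[i + 1] - x[i - 1]) / (t[i + 1] - t[i - 1])  # noisy finite difference
print(naive, local_slope(t, x, i), np.cos(t[i]))        # the fit tracks cos(t) better
```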
Processors have always had new modules slapped on to cover different types of problems. The GPU was added to do repetitive operations; radio receivers usually have an analog demodulator to make it practical to do signal processing on the signal; quantum computers are right around the corner, where 12 or so qubits could be connected to your processor in a separate box via Thunderbolt to perform verification and encryption tasks; GPUs now have internal tensor cores to perform the operations you discussed (and even a bit more general) at lower bit depth; and even operations like "multiply then add" have separate modules inside the processor that compute more efficiently than doing the multiply and the add as separate operations.
Quantum computers are not gonna be beside your computers unless you have a massive shed with space for a state of the art helium/nitrogen cooling system on hand
@@Kyrator88 I'm not completely discounting the possibility of solid state room temperature qubits, but yeah, excitement about personal quantum computers is pretty silly.
A quantum computer is not required for performing quantum-safe encryption/decryption. NIST is very close to standardizing one of many candidates that provides this functionality.
There were computational control systems built for military aircraft that used Fluidics. A blend of analog and digital computers that utilized high pressure oil flowing thru spaces in a stack of very thin metal plates. They performed Boolean operations as well as giving dynamic analog control of hydraulic distribution for control surfaces and the like. They would make an interesting topic for a related video. And they were immune from EMP in the event of a nuclear attack.
I happened to watch this just after playing with a modular (audio) synthesizer. In these, each module is controlled by a voltage, originating from an oscillator, a keyboard, or a "sequencer". The concept that makes a modular synth interesting is, the voltage pattern (waves) output from a module can either be used as an audio signal (if it's in the audio spectrum), or to control another module. In the simplest case, output from a voltage controlled oscillator (VCO) can be routed to a speaker to produce a sound. But it can also be routed to a module that filters a signal in some way, based on the output voltage of its predecessor. Maybe the thing that makes "ambient" music's slowly-shifting textures interesting is that they mimic the neural networks of our brains.
A lot of them actually do! You can even (kind of) help your brain waves synchronise with the oscillations; it's not by brute force (you have to play along or it doesn't work very well), but it can greatly help with sleeping, learning and related stuff. Reminds me of old hardware synths, where you had to connect each of the synth parts with cables. That gave you an amazing amount of flexibility! BUT no one bothered to write the configurations down... That was the funniest and most awful part at the same time...
The main problem with analog is saturation of current or voltage, which causes a loss of linearity. One way to resolve it, as he said, is to convert back to digital and amplify by a suitable factor.
During engineering we had a subject related to analog and digital communication, this device was in the lab where I just used to turn the knobs and alter the waveform, though I never completely understood what was going on. I passed that subject with grace marks😝. Now when you explained it, I feel that I have wasted an opportunity during my graduation, but I also think that students would benefit immensely when they watch this type of content. I wish you made this video 6 years ago…
It's like this: you learn lots of things in college/school whose point only becomes clear in later years. A lifetime is too short to review and understand everything, but you will find time for almost all of the subjects you are interested in.
"Simple tasks like telling apart cats and dogs." You can find more difficult task but this is already an incredibly complex task expecially when they are images
@@henrypetchfood It's trivial for a human because evolution produced neural circuits capable of solving this very hard problem. Our own minds are the least aware of what they do best.
True, but Derek's point is that in the grand scale of what we'd hope to achieve with analogue computers (in the future), telling apart cats and dogs is a simple expectation - yet it's still hard to do with current technology.
While the description of the fundamental tech here is great, where I feel this video falls down is in not recognizing that probably the primary issue with analog vs. digital is the economics of fabrication, and that's not really addressed.
Not a computer person, this is the first ever video I have watched that explains the drawbacks of analog computers other than the size limitations which is the only thing they teach in school. I think using both digital and analog can help tackle hurdles either technology can't do alone.
Another huge drawback is the engineering cost. Even if analog circuitry were as compact as digital, it's much harder to design complex analog circuits than digital. Digital circuits may be more advanced, but they're not more difficult!
As one of its engineers told me many years ago, the first Concorde models used analog computers. They had many in-flight problems during initial testing because, while digital variables might overflow and give unpredictable results, the analog computer boards would fry when variables overflowed, resulting in higher than expected voltages and currents. It took them a while to protect every component from this and stop burning computers in flight. As harsh as it sounds, it was reportedly not a major problem, as Concorde was not really dependent on automation for safe flight. As reported by its test pilots, it was actually a pretty docile airplane.
Overflow also isn't difficult to program around on a digital computer. You take a performance hit, but the hit isn't that bad on modern hardware since almost anything besides reading/writing memory is dirt cheap.
@@Dyllon2012 The absolute hit isn't that bad. Relative hit might still be really bad. Also keep in mind that languages like C or C++ are allowed to do extra crazy stuff when signed integers overflow. They aren't restricted to doing the predictable wrap-around.
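A small Python sketch of the kind of guard being discussed: a saturating 32-bit add that clamps instead of wrapping around (or invoking the undefined behaviour that signed overflow triggers in C/C++). The 32-bit width is just a representative choice.

```python
# Saturating 32-bit signed addition: clamp on overflow rather than wrap.
INT32_MAX = 2**31 - 1
INT32_MIN = -2**31

def saturating_add_i32(a, b):
    """Add two 32-bit signed integers, clamping the result on overflow."""
    total = a + b                       # Python ints don't overflow, so just range-check
    return max(INT32_MIN, min(INT32_MAX, total))

print(saturating_add_i32(INT32_MAX, 1))    # stays at 2147483647 instead of wrapping
```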
One thing of note: our transistors can never be a single atom. A transistor is essentially a gate conductor wrapped around a semiconducting channel, and there's just no way to do that with fewer than about 3 atoms, and they can't do that yet. As lithography gets smaller they find that the design of the transistor itself keeps changing; what worked at 14nm doesn't really work well at 10nm or 7nm, and often various components become relatively larger even though the overall design has shrunk. If we had stuck with power levels similar to the air-cooled 486 era, we could make transistors differently, and smaller, but we went for clock speed and performance, so transistors are insanely complex and much, much, much larger than an atom.
Hi Derek, I really enjoy all your videos and content. It's great that you have been putting out quality content for such a long time. I also love how you dive deep enough to get the complexity and then wrap it up in the end with a memorable comparison. One thing in this video I found ironic is that you have made videos about proper statistical representation, and yet at 12:15 you plot the bar chart not from zero but starting at 20%, then progressively dropping down to 0%. I found it interesting because at first I didn't notice it, but once I saw it I realized how the representation really made me think it dropped lower than it actually did. In this case it doesn't change the overall message, but it made me more aware of how such seemingly innocuous displays of data can change perception. Thank you again for all your great content. Greetings from Austria, where we don't have kangaroos! 👋
Even within the digital domain, your typical computer is relatively quite slow. The whole point of the clock frequency in a CPU/GPU is to logically separate circuit signals so that each instruction (or segment thereof) has time to complete, and thus it must be set for the slowest instruction/segment. Because of this, though, the processor may carry out any combination of any number of its prepared instructions, and can thus perform practically any mathematical algorithm given enough time and memory.
First, the whole ending of this video is digital systems. The only thing analog are inputs, such as cameras or lidar. But that goes through an A to D conversion and the system being used to process that data to come up with solutions, in this case for a self driving vehicle, is digital. And slow is a relative thing. A PC can carry out millions of math problems a second and that's not slow. So, slow as compared to what? Analog systems aren't any faster even though there's a continuous output. The same is true for digital systems doing the same work. I know. I worked with military systems and saw both analog and digital firing solutions and I can tell you the digital systems were more accurate.
@@johndoh5182 I think the point being made was that digital systems, using the definition of systems that abstract signals into boolean logic (1 or 0), are fundamentally slower than analog (non-abstracted) systems using equivalent technology, due to the fact that the partial states are often useful for more complex calculations which digital systems would have to take extra steps to compute. He gives an example of an analog processor used for AI which uses the same technology as typical digital processors but uses the partial state to represent the multipliers stored in each neuron, hence the part on AI-driven vehicles. Though with most prediction-based machine learning and neural networks, once the system has been trained, the prediction/decision process for any given input requires only about as much processing power as a single training sample, so the car itself should not necessarily require the specialised processor. PS: make that billions of math problems per second. If the IPC is 1 for any given instruction on any given processor, then if that processor is running at 4GHz, that is 4 billion mathematical operations per second per processor core (and that's not even counting SIMD).
Clock frequency is simply for synchronization, not only within the CPU but also with different devices like memory, the data bus, etc.; in short, to generate ticks. Simple instructions always run in the same number of ticks, so when you increase the frequency you increase performance, but that also generates more heat. It's the number of ticks that matters. Modern CPUs are RISC cores that emulate a CISC platform, so it all depends on the algorithm behind the instructions, i.e. the CPU microcode: better microcode gives you more performance, but internally the simple RISC instructions are synchronized to the CPU architecture. For example, a simple NOP runs in one tick because that's how the CPU was designed: a 40 MHz CPU can run 40 million NOPs in one second, and at 50 MHz it can run 50 million NOPs in one second. In short, the frequency of a CPU/GPU is how many times per second it says "I want to do something", which is why we try to organize data and design code with no collisions or stalling, so we don't waste precious ticks. You can't simply say an instruction "has time to complete", because the worst instructions, like divide and multiply, take anywhere from 20 to 300 ticks depending on how big the numbers are; while the CPU executes such an instruction it can't do other operations, like communicate with your GPU's data bus or access memory. That's why pipelines were designed next, and then multicore CPUs, so you can run many instructions in parallel; RISC CPUs even in 1997 had pipelines that could run 7 instructions at the same time. Now add a multicore design like a 32-core CPU, and again we come back to the problem of heat: the more cores a CPU has, the lower the frequency it can run.
This is pretty much the point that Veritasium misses. My guy should read some Claude Shannon; it's clear he is not an EE degree holder. Digital communication and computation are resilient to noise and theoretically lossless, and being slower can be a side effect of that. Also, digital computers are way easier to design/fab in 2022.
Hey Veritasium, you should really look into the hardware used specifically to simulate neurons like those in the human brain. There are chips built at a physical level to simulate individual neurons, and as of right now consumers cannot buy them. It would be so cool if you could get an in-depth look at these; the information about them is kind of difficult to find. Artificial intelligence in the traditional sense has neurons that do not even come close to how they work in real life; they should really be called artificial function generators. Generating consciousness with things like TensorFlow is just very unlikely. Most don't have memory, and if they do, it's simplified, and recursion is also simplified compared to real-life neurons. Also, I think it would be really cool if you dove into the technicals of artificial consciousness.
@@MGHOoL5 DuckDuckGo for "neuromorphic computing"; University of Manchester's SpiNNaker chip, Intel's Loihi chip, something out of Stanford, are actual implementations. No one knows if General Intelligence and "consciousness" require the detailed neurobiology of the human brain, or whether it only requires a few more breakthroughs in conventional AI like backpropagation plus a few more orders of magnitude increase in the size of digital neural networks.
@@tylerevans1700 Wait really? Is that a thing where you would go to another search engine like DuckDuckGo to find something 'more relevant' than you would find in google?
@@MGHOoL5 an ad blocker is essential to filter out paid search results in Google. However, Google still prioritizes those search results aligned with its business interest: its own services, sites that serve tons of ads using Google's ad tech, $^@! 8-minute videos on UA-cam instead of simple text explanations, showing sites with ads instead of providing the answer right in search results, etc. DuckDuckGo does less of this, But its search algorithm isn't as smart at understanding what you're looking for. I use Firefox which lets you easily choose which search engine to use from the location field.
I've watched a lot of your stuff for years now, but this is the best one by far. Great job of explaining something so complex, difficult, and relevant!
As a young PhD student in Computer Science, your explanation of how neural networks come to be and evolved, and the math behind it, is the cleanest and most accessible that I have come across. As I focus on computer architecture, I came to this video without much expectations of learning anything new, but I am glad I was wrong. Keep up the great work!
From USA?
There’s a reason he has 11, 000, 000, 000 subscribers after all 😏
@@alex.g7317 Wow, that's more than the population of Earth. Where does he find all those subscribers??
@@unstable-horse Mars
@@unstable-horse omg, lol 😂. That was a typo!
I meant 11, 000, 000!
Fun fact, that Graphics Card he’s holding at 18:00 is a Titan xp with an msrp of $1200, he says it draws 100w but it actually draws about 250w, so that tiny chip that only draws 3w is even more impressive
Let's hope these new analog chips can solve our GPU shortage problem 😂
In general when doing machine learning you are only using the CUDA cores of a graphics card so the wattage never gets close to its maximum.
A lot of the processing units are simply not being used, for example shaders and 3D processing units. For example on my GTX 1080 I sit between 60-90w out of 200w when doing Pytorch machine learning. So I think 100w out of a maximum effect of 250w seems reasonable.
you can underclock GPU's, thats what they do in cryptomining to improve their profit margins, depending on the chip they can operate effeciently at a fraction of their nominal power
man, I'm old enough to remember when a $1200 card was considered EX PENS >IVE<
And not...'going price'.
@@AC3handle 1200 wont buy you enough power for a decent DL rig either. An RTX 3090 goes for ~$3000 USD
One of the first things I learned in Electrical Engineering is that transistors are analog. We force them to behave digitally by the way we arrange them to interact with each other. I'm glad there are some people out there that remember that lesson and are bringing back the analog nature of the transistor in the name of efficiency and speed.
At the expense of accuracy?
Well, semi analogue. Don't forget the bias (voltage drop) before you get current amplification.
Also, to say that analogue computers are more power efficient that digital is pretty hard to back up. A $2 microcontroller can run on a few mA for the desired task, then sleep on uA. You'll need at least 5mA for an analogue computer to start with and you can't make it sleep.
@@JKPhotoNZ Great point. And with current Nano transistor technology, That efficiency (along with raw power) is going far beyond what a true analogue system could produce.
Yeah but then you'll never know at which zone is it on?
Because amplification happens differently for different input parameters. And not all transistors from the same batch would perform the same, i.e. it will lack repeatability (as Derek mentioned).
the insoluable (even in theory) problems of analog are noise and signal integrity, which is why he didnt even mention them. This channel has gone to poop honestly.
Analogue was never meant to die; the technology of that time was the limiting factor IMO. It appears like Analogue - Digital hybrid system can do wonders in computing.
I know this is an old comment, but I figured I'd add that as far as physical packaging goes, nothing stops us from putting one of these next to a conventional CPU. Cooling it would be the hard part as the temperature would swing the outputs by introducing noise. Might be better as an M.2 PCIE device.
@@DigitalJedi It's incredibly unlikely this will ever expand into the home. These would likely be built entirely differently from traditional computers.
@goldenhate6649 As we've seen they can be build on NAND processes already, which are widely adopted by consumer electronics. The use case provided of low-power wake-word and condition detection seems like a great application if they can find the right product in the consumer space.
That's what he said in the end (not explicitly though)
Yeah, a hybrid system could definitely be extremely good, because we are in the era where parallelization and concurrency matter heavily in computing.
Hyped for the future of computing. Analog and digital could work together to make some cool stuff
True AI is going to be the end of us. Why would you want that?
@@teru797 True AI won't be possible for the next 200 years, and by then, if humanity keeps living the way it does, we ain't gonna survive anyway.
@@teru797 it would still take quantum computers to be able to have the memory necessary to run
@@teru797 cool
@@teru797 we are already the end of us
For amusement only: My first day at work was in1966, as a 16 year old trainee technician, in a lab dominated by a thermionic valve analogue computer (or two). These kept us very warm through the winter months and cooked us during the summer. The task was calculation of miss distances of a now-obsolete missile system.
One day I was struggling to set up a resistor/diode network to model the required transfer function, but the output kept drifting. I found the resistors were warming up and changing value. Such are the trials and tribulations of analogue computing....
That's why the temperature coefficient is so important for modern precision devices. A high-accuracy, low-ppm resistor is expensive, which is one of the reasons high-end electronic instruments cost so much.
I was going to comment that one disadvantage of analog computers is keeping them calibrated. If you want a precise amount of 'voltage' or movement to represent a real-world value, you have to keep it calibrated. Older mechanical ones had wear/tear, electronic ones have issues as well.
Ah!? But what if the resistors were warming up, digitally?
@@stefangriffin2688 you can't have a digital resistor
@@stefangriffin2688 yeah, digital signals work with gates - on or off
I see Derek is getting into modular synthesizers! Also, funny to see how the swing in musical instruments to go from analog to digital and back is being mirrored in computing generally.
My first thought when he pulled out the analog computer was "Hey that looks like a modular synth!"
Witness Audio Modeling (search for it on YouTube).
Music has always been an AI task
As a semi-professional music producer with almost half a decade of working with professional musicians, I would agree - and this is mainly because people feel a lack of "soul" in music: those small human errors that we've spent decades trying to get rid of with Autotune, drum machines, sequencers, digital synthesisers and digital samplers (the last two CAN create sounds that will always come out the same way as long as the input stays the same - however, there are exceptions).
This is probably something the people I know in the music industry refer to as "the generation rule"; in brief, the music of today is a result of what our parents and grandparents heard, combined with new technologies and pop culture. If you're interested in music and maybe want to stay ahead of the game, look it up. Some refer to it as the "30-year rule" as well.
@@p1CM AI has no tasks
I had a business analysis course that tried to explain the perceptron and I didn't understand anything. I don't have a strong maths background. This video is pure genius. The way you explain ideas is amazing and easy to understand. Thank you so much. This is my favourite channel
My professor always said that the future of computing lies in accelerators -- that is, more efficient chips that do specific tasks like this. He even mentioned analog chips meant to quickly evaluate machine learning models in one of his lectures, back in 2016! It's nice seeing that there's been some real progress.
That's pretty much where things have always been: using basic building blocks that do specific functions. A linear voltage regulator has the job of maintaining a constant output voltage for a given range of current levels and different input voltages. You can buy an op-amp and use resistors to make a circuit called a Schmitt trigger. Or you might just buy a Schmitt trigger from Texas Instruments and put it onto a board with less board space consumed. Or a Schmitt trigger might be embedded for free in certain other ICs (integrated circuits).
The major computing engines I have seen so far have been effectively GPUs, CPUs, and FPGAs. Xilinx & Altera (now Intel) have specialized in making FPGAs. An FPGA's basic internal components are logic elements with flip-flops with reset, async reset inputs, 4-input look-up tables, etc. Cascade these to make larger units like a multiplexer, floating-point arithmetic unit, etc. It's programmable, so you can effectively emulate a worse-performing specialized CPU. A CPU is still more efficient at doing CPU-type functions. A GPU does specific stuff as well.
The idea of doing analog computations honestly just sounds like another building block to add into a complex system. Only there simply hasn't been a large enough demand to require the generation of specialized hardware like what was described in this video. That one start-up sounds like it's developing a chip that will do a series of very specific functions and will need to be integrated into a larger system to accomplish a specific task.
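For anyone who hasn't met a Schmitt trigger: the op-amp-plus-resistors version mentioned above uses positive feedback so the switching point depends on the current output state, which gives two thresholds (hysteresis) instead of one. A rough behavioral sketch in Python; the saturation voltage and resistor values are invented, not from any specific part:

```python
# Behavioral model of a non-inverting op-amp Schmitt trigger (illustrative only).
V_SAT = 12.0          # assumed op-amp saturation voltage (+/- 12 V)
R1, R2 = 10e3, 47e3   # made-up resistor values
V_TH = V_SAT * R1 / R2  # switching thresholds are +/- V_TH for this topology

def update(v_in, state):
    """Flip the output only when the input crosses the threshold for the current state."""
    if state == +1 and v_in < -V_TH:
        return -1
    if state == -1 and v_in > +V_TH:
        return +1
    return state

state = -1
sweep = [x / 10 for x in range(-40, 41)] + [x / 10 for x in range(40, -41, -1)]
for v_in in sweep:                      # sweep the input up, then back down
    new_state = update(v_in, state)
    if new_state != state:
        print(f"output flips to {new_state:+d} at v_in = {v_in:+.1f} V")
    state = new_state
```

Running it shows the output flipping high on the way up at a different input voltage than it flips low on the way down, which is the whole point of the circuit.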
Well, that has sort of always been the case.
I don't know what the first accelerator was, but one of the fairly early ones was the FPU. We now just take it for granted.
Sprite accelerators were also fairly early.
Then graphics accelerators.
Then video decoders/encoders.
Then MMU accelerators.
Then 3D accelerators.
Then SIMD accelerators.
Then T&L accelerators.
Then physics accelerators.
Then raytracing accelerators.
Then deep learning accelerators.
ASIC devices have existed for decades...
@@LundBrandon they will soon become obsolete though due to reconfigurable computing devices line FPGAs
@@calculator4482 FPGAs have also been around for decades, plus they draw more power. I'm a computer engineering student right now currently designing a CPU to be synthesized onto an FPGA. I'm not dumb.
started watching this channel when I started high school and now that I'm about to get a phd in mathematical logic, I've grown an even deeper appreciation for the way this channel covers advanced topics. not dumbed down, just clear and accessible. great stuff! (and this totally nerd-sniped me because i've been browsing a few papers on theory of analog computation)
Absolutely. Love the way he covers the concept for everyone. Those who don't know in depth about it can still go away with a sort of basic understanding. And those who do understand it in depth will enjoy discovering new areas of invention that they can further explore.
Looking forward to reading some papers on using analog computing in neural network applications
I'm smart too.
Same here. But I didn't go to high school and my life became a total mess and I haven't graduated whatsoever :)
“nerd-sniped” lmao I feel exactly the same
and here I was thinking I was way ahead of the curve on alternate computing 😂
jokes on me
I am a process control engineer, born in '63. In the '80s we used analog computers to calculate natural gas flow for an oil and gas company. A simple flow computer was around 10 kilos, full of op amps and trimmer pots. It was a nightmare to calibrate. :)
Op amps with their offset voltages and input bias currents leading to inaccuracy.
Sounds like a nightmare.
Constant recalibration required?
Absolutely.
@@rogerphelps9939 just calculating the uncertainty at each step sounds like a nightmare.
I think NASA (or was it still NACA at the time?) was able to simulate flight characteristics with analog circuits too. I'm thrilled to see this tech coming back!
Once I got decency I will forbid using 1 and 0 ....yeah..
AI researcher here, you did a great job on this. For anyone interested, the book Perceptrons by Minsky/Papert is a classic with many proofs of limitations and explorations of the limits of the paradigm. It still holds up today, and it's fascinating to read what scientists were thinking about neural networks during the year of the moon landing!
A Logical Calculus of the Ideas Immanent in Nervous Activity
Great job in the AI subject but... is this analog computing thing really happening? He only goes to one company which no longer exists BTW... I don't know
now we have sentient ai
@JoseAntonio-ng5yu Mythic still exists, and they're funded by BlackRock and Lockheed Martin, so they won't disappear any time soon (source: the footer of Mythic's website)
As a 70-year-old boomer my technical education involved building and testing very basic analog devices. Thanks for this video, it helped me to a better understanding of neural networks.
You must be really healthy because you don't even look close to 70! :D
@@magma5267 Thanks for the heads up. I've got a good woman, 6 children, 8 grandchildren and a recently placed stent in my heart that keeps me going :)
@@TerryMurrayTalks Good for you my man! I am still 20 and don't know what to do in life :(
Yep, you're 70, mate
@@vedkorla300 You have plenty of road left to travel, follow what you love, enjoy the journey, don't let the bumps in the road stop you and if you can get a soul mate to share it with you it will all be good.
As a NAND flash engineer, that bit about using floating-gate transistors as analog computers is interesting, particularly because in flash memory there is a thing known as "read disturb", where even the low voltages applied to the gate to query its state (like during a flash read) can eventually cause the state itself to change. You would think it is a binary effect, where if the voltage is low enough it would just be a harmless read, but no... eventually there will be electron build-up in the gate (reading it many times at low voltage has a similar effect to programming it one time at a high voltage). In this particular application, the weight would increase over time the more you query it, even though you didn't program it to be that high in the beginning. It's interesting because it's sort of analogous to false memories in our brains, where the more we recall a particular memory, the more inaccurate it could potentially become.
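To make that failure mode concrete, here's a toy Python model. The disturb rate and noise level are my own invented numbers, not NAND physics or any vendor's behaviour; the point is just that a queried analog weight can creep upward with use:

```python
import random

def read_weight(stored, disturb_per_read=1e-5, noise=1e-4):
    """Read an analog weight; each read slightly 'programs' the cell (toy model)."""
    observed = stored + random.gauss(0.0, noise)   # read noise
    stored += disturb_per_read                     # cumulative read disturb
    return observed, stored

weight = 0.500  # the value we intended to program
for n in range(1, 100_001):
    observed, weight = read_weight(weight)
    if n in (1, 1_000, 10_000, 100_000):
        print(f"after {n:>6} reads: stored ~ {weight:.3f}, last read {observed:.3f}")
```

With these made-up numbers the weight has drifted from 0.5 to roughly 1.5 after 100,000 reads, i.e. the "memory" has changed simply by being recalled.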
Underrated comment!
Absolutely love this comment! This has been on my mind for at least an hour now, the point you make is intriguing and a bit haunting, thanks for that!
Fkin right on!!
Very interesting.
Underrated comment. Then in Eternals style we will have to reprogram the memories of our servants every now and then.
I’ve been an engineer for 44 years. Great video. I actually worked on analog computers in the 70s when digital processing was still new. Never to this level though. Great job!
🇨🇦🏳🌈
Its amazing how fast all of this is evolving. Looking at this, and comparing it to facial recognition software in simple phone apps we have now really shows how much all of this has influenced what kids and teens easily use today.
I didn't catch where he stopped talking about analog systems, though, when he went to the logic systems being used for matrix operations, because that part was digital. There may have been analog inputs into the system, but there's an A-to-D conversion, and everything he showed at the end was strictly digital, so it was a bit misleading there. Current systems for AI are digital.
@@raijuko yes but i still believe in God . And i am a fan of science experimenting and inventing
This is a very well explained video on analog computing. Never would I have thought the topic of analog computing could be put into a 20-minute video with such phenomenal animation and explanation. Respect for your work and effort to make science available to all for free. Respect 🙏
It’s funny, for those of us who are into electronic music production, analog never left! There are lots of great analog synthesizers out there that can produce all kinds of complex waveforms, and some of us have been known to tack an oscilloscope on to modular gear to view and use those waveforms. Even some relatively simple gear can produce complex, “3D” structures with the correct cable patches. A lot of what you described at the beginning is the backbone of synthesis for music, and the same principles obviously apply to mathematical operations.
You can do everything digitally that an analog system can do and more. An example is resampling in order to change the frequency scale of a recording. This can be done in real time using digital methods, not so much for analog methods.
@@rogerphelps9939 Depends on what you’re doing and what’s important to you. Analog synths are great for experimenting with the knobs and patch bay (if available) and learning what exactly each change has on the overall waveforms. They’re really great for learning what exactly you’re doing and what you’re getting as a result. Yeah, there are software synths meant to emulate hardware knobs and a patch bay, but I haven’t found clicking through all that as valuable as plugging and experimenting yourself. That stuff really depends on the person, though.
What doesn’t depend on the person, and is arguably more important, is the fact that aliasing can end up being a problem on digital synths. When you start doing some crazy cross modulation between sources and/or you’re dealing with lots of harmonics, if the processor can’t keep up, your sound will suffer. Same with super high frequencies. Depends on the synth, of course, but analog synths can tend to have a warmer, purer sound to them as well, because you don’t have to emulate all those harmonics. It really comes down to the same arguments being made here regarding analog computers: there’s no processor overhead needed to create some very complex shapes, and to do so perfectly accurately, on analog. I use both types of synths, as lots of people do, and I would never say that one somehow makes the other unnecessary. Hell, there are hybrid synths that give a mostly analog signal path while allowing for, say, a digital sample and hold circuit and the ability to save certain parameters. People make those kinds of things for a reason, you know?
Pythagoras discovered math with music I think right? Really like your comment
@@rogerphelps9939 Difference is that you need billions to trillions of transistors to do digitally what can be done using tens to hundreds of transistors analogously.
@@RAndrewNeal Wrong. The errors arising from component tolerances, noise and temperature dependent offsets make anything complicated pretty much impossible in analog. Transistors in digital processors are extremely cheap. Provided you have good DACs and ADCs you can do anything to whatever precision you need in digital.
This actually helped me a lot to understand how neural networks work in general. For me it was kind of like black magic before. It still is to an extent, but knowing that modern neural networks are essentially more complex multi-layered perceptrons helped a lot.
Yes, indeed. Something I find fascinating are recurrent networks, where some neurons feed back into the network which would allow some information to be saved from one image to the next one. This would allow the AI to process things that change in time, like music and video. For example, if you're tracking a subject with a camera and he turns around, a recurrent AI would be able to continue tracking the subject.
Yeah, in the end some of it is kind of voodoo black magic though; I mean, they call them black boxes for a reason.
speak in english pls
The video seems to contain some quite biased info, though. The top-5 error rate for humans of 5.1% is of course not accurate. If human beings were that bad, we would accordingly have much, much higher car accident rates. Those kinds of inaccurate percentages come from statistics based on captchas, and several conditions distort the results there.
- Human users often don't bring the concentration and attention to captchas that they actually could. In fact they are annoyed by them and often just click quickly through them.
In traffic, while driving a car, human beings are much more alert and make tremendously fewer mistakes. Here human beings still beat autonomous driving computers by several orders of magnitude, measured in car accidents per million driving hours.
- The captchas often do not match actual human perception. The captcha images are often unclean, with low resolution and distortions. In the real world humans perceive much higher quality from their surroundings than some crippled captchas. A clearer image increases recognition dramatically.
It is really crucial in educational videos whom and where you take your numbers from. Science is not always as objective as we are told, especially when corporations with their own interests are funding it.
Other than that, the video is quite interesting. I also wish, though, that many common misconceptions had been cleared up. For example, many people still believe that computers work like human brains. This is plain nonsense, mostly spread by science fiction. The brain still poses big mysteries to us, especially "the big problem of consciousness".
It is still kind of voodoo or black magic. While the overall working mechanisms are well known and the output can be estimated based on the input, how the neural network exactly reaches the answer is nearly impossible to inspect because of the sheer number of variables.
In essence, you feed a black box with something and you can expect it to give you a particular answer with some confidence, but no-one has any damn idea what exactly happens inside the black box.
i recall our control system teacher at the university in the '90s said Space Shuttle flight controls contained analogue computing because it had to process like several thousand sensors' input to produce outputs and digital was just too slow for the job.
He lied.
@@rogerphelps9939 The Space Shuttle did indeed use analog computing for some of its flight control systems. It used a hybrid digital/analog system for flight controls: most of the high-level control logic was handled by digital computers, but critical low-level control functions were performed using analog circuits.
The analog components were able to process sensor inputs and produce control outputs much faster - on the order of microseconds - compared to even the fastest digital computers of the era, which took milliseconds. This speed was essential for stability during flight.
I can imagine it also just being more fault tolerant. Discrete = hard, continuous = easy. An overflow in digital can literally crash a whole system. In analog there is more room for error.
Mythic Gate's approach to work:
Kind of reminds me of the copper filament memory planes with ferromagnetic rings representing the bits used as memory for the AGC (Apollo Guidance Computer).
The video this memory is based on was released by Destin, aka Smarter Every Day, and covered his and Linus Sebastian's meeting with Luke Talley, a former IBM employee who, at the time of the Apollo missions, was a member of the data analysis teams responsible for the analysis, evaluation and processing of the telemetry data received from the Apollo instrument ring.
My undergraduate work was actually with a professor who did research on the brain as an analog computer and on using neural networks and analog computing as an attempt to achieve super-Turing computation. A researcher whose name is worth looking into in all this, from my research, would be Hava Siegelmann. At the time I understood much less about the problem. My task was essentially to try and prove that analog computation could be modeled with a neural network on a digital computer. Not sure if my comment will be buried or not, but it's an area worth looking into if you're more deeply interested in this problem.
I have never done much/extremely deep research on a topic, but this seems very interesting.
I'd like your comment to not be buried.
0s and 1s, highs and lows, voltage and no voltage (digital representations of numbers) have absolutely nothing to do with brain neurons. It's complete BS. For all we truly know about the brain, there could be a nearly endless amount of information with every firing of a neuron. We have no idea what format consciousness information is in, and likely never will, as "in time" humans.
@@noblenessdee6151 Engineers : well I will approximate every thing a neuron is saying, to just two numbers. 🙂...
@@AayushPatel-gc3fw words are the limitations, not numbers, all words can be expressed in code, not all humanity can be expressed in words.
As a guy who helps manufacture flash memory I find this really intriguing: especially because flash memory is continuing to scale via 3-D layering, so there’s a lot of potential, especially if you can build that hardware for multiplication into the chip architecture.
You are still struggling with interconnects from one side to the other
@@ravener96 With many ML algorithms you can split problems into multiple sub-problems for different networks to handle. I wonder if developing that area of ML would be helpful to make effective analog systems? For an example, in image processing a pixel at the top left of the image has little interaction with a pixel in the bottom right of the image compared to nearby pixels. If you wait to compare them until multiple layers later, it speeds up processing the image and allows for algorithms to become more adept at finding sub-patterns in the image.
@@Zeuskabob1 Depends on what kind of image processing the neural network is doing. If the computer wants to identify a face in a picture, maybe it doesn't need to process all pixels once it has processed all the pixels near the face, but in some cases distant pixels can indeed be correlated, like images from a camera in an autonomous car identifying the white lines of a street, where it could be 99% sure it is a straight line but the corner pixels clearly indicate a curved line.
@@ravener96 Yeah, but interconnects can be designed around with clever architecture to an extent. It's still quite interesting.
Flash cells are micron scale, while the AI accelerators doing integer operations are built with the latest 4 nm technology. And floating gates have a really limited lifetime compared to pure logic circuits.
Freaked me out to see that opening analog plug board. That’s how I learned programming in my first data processing class at Fresno State University-in 1964. Eerie to have that memory rise.
Damn you're old! Glad you're still kicking around
@@Ozhull he’s only like 75 chill
Lol
Woo fresno!
I graduated Fresno State in 1993 with a BS in CS (after a stint in the military). I even took an artificial intelligence class. I still have my perceptron book.
I remember seeing an analog differential calculator in high school in my physics and electronics teacher’s lab. It was more of a museum piece. It was never used.
RIP Mr. Stark
To be fair, many of the tools we use are analog. We just don't call them "analog computers"... even though, they kind of are.
Exactly. Museums is where analog computers belong.
@@rogerphelps9939 The words of someone who knows nothing but his own little world. And he is content with it. Honestly, I'm jealous.
For real, stay that way or life will get an awful lot harder. I would give everything I have to acquire such luxury.
The problem with this system of computing is that interference is a huge factor. When you only test to see if there is voltage or not, you don't need to worry about interference. But when you get into building systems that use varying voltages, say 0 V, 0.5 V, 1 V, then you need to worry about interference, and the more levels you add, the more of an issue this becomes. Interference could come in the form of microwaves, radiation, mechanical vibration (think fans cooling off a server rack), and the list drags on, as almost anything can cause interference. That oscilloscope used in the example is an expensive piece of equipment that minimizes interference, but the cost is far higher than with a binary number system.
Binary computers have to deal with interference too, that's handled by error correction. Error correction is already baked into the infrastructure of every digital component, to the point where we don't realize it's there. They suggested one method of error correction in the video, and they're probably not even scratching the surface of what's possible.
i think the crux is the medium: the AI models the brain, which is so good at rebuilding itself, and the brain uses electrons and chemicals to convey information. our transistors are too limited to mimic that interaction. for example, the new 3 W chip needs to be custom-made for each model, if i got it right.
@@introprospector Yes, but with binary, error correcting is simpler, as interference isn't as much of a burden on the architecture. When the job is to check whether there is or isn't voltage, it is a lot less complex than checking 8 different voltage thresholds.
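A quick way to see why more levels per wire are harder to protect is to simulate it. The sketch below is my own toy model (assumptions: Gaussian noise, evenly spaced levels on the same 1 V swing, nearest-level decoding; none of this is from the video): it counts how often a 2-level and an 8-level signal get decoded to the wrong symbol under identical noise.

```python
import random

def symbol_error_rate(levels, sigma, trials=50_000):
    """Send random symbols over a 0..1 V wire with Gaussian noise, decode to nearest level."""
    step = 1.0 / (levels - 1)
    errors = 0
    for _ in range(trials):
        sym = random.randrange(levels)
        v = sym * step + random.gauss(0.0, sigma)          # transmitted voltage + noise
        decoded = min(levels - 1, max(0, round(v / step)))  # nearest-level decision
        errors += decoded != sym
    return errors / trials

sigma = 0.05  # assumed noise, 50 mV rms
print("2 levels (binary):   ", symbol_error_rate(2, sigma))
print("8 levels (3 bits/wire):", symbol_error_rate(8, sigma))
```

With the same noise, the binary case is essentially error-free while the 8-level case misreads a noticeable fraction of symbols, which is roughly the trade-off being argued about here.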
@@fltfathin This is called an ASIC (application-specific integrated circuit); the computer is pretty much just sent to the landfill after it's outlived its usefulness instead of having the ability to be repurposed. Which is another concern about where computing in general is heading: as PCBs use less and less semi-precious or precious metal, there is less incentive to recycle.
@@dust7962 Yeah but I put my mobo in the case first and then the radiator wouldn't fit, so I had to take it out and install the radiator first. It was really annoying. I didn't watch this video, but I'm assuming this is what he was talking about.
I used to calibrate analog computers that ran experiments and test equipment. They were often odd mixtures of analog and digital technologies. Near the end I had to keep a few machines alive as they aged out of tolerance; there was always a way you could tweak out some more performance by shifting the calibration away from ranges you didn't need, in a much more forgiving way than any new digital machine could.
Anyone know a good science channel for me to check out?
thanks a lot Brian Boni for your valuable input
(keys: computers: mix of analog and digital)
@@nenmaster5218 Nile red and Nile blue
@@yuro5833 Thx!
Know Hbomberguy?
@@nenmaster5218 I actually thought I didn’t then realized I had seen several of his videos and forgot about him so thank you as well
wow that analog chip sounds extremely competitive. I'm surprised they already have something that good. mad props to the guy who figured out the hack with the flash storage
I wonder how temperature sensitive it is.
@@dorusie5 I'm mostly curious about the write-cycles and lifespan of the flash cells. Is the network going to get Alzheimer's after a few days?
integrated circuits like that have always been really efficient - the downside is that they are extremely specialized... as the guy said: it's not a general computation chip
it can literally only do matrix multiplication - but that it can do really damn efficiently (though slightly imprecisely - which is likely still good enough for neural network purposes, since they aren't interested in the exact value of the result)
so... sure, that's competitive for that one purpose - but useless for anything that isn't that purpose
but if the type of calculation it can do is in high demand, they can likely sell a lot of specialized hardware, either for specific devices or as plug-in cards for computers that supply fast matrix multiplication operations
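For anyone wondering what "only matrix multiplication, but really efficiently" means in the analog case: the trick boils down to Ohm's law and current summing - weights become conductances, inputs become voltages, and each output wire simply sums the currents flowing into it. Here's a toy digital simulation of that idea (matrix sizes and the noise figure are invented; this is a sketch of the principle, not Mythic's actual chip):

```python
import random

def analog_matmul(voltages, conductances, noise=0.01):
    """Simulate a crossbar: I_j = sum_i V_i * G[i][j], with a little multiplicative error."""
    outputs = []
    for col in range(len(conductances[0])):
        current = sum(v * conductances[row][col] for row, v in enumerate(voltages))
        outputs.append(current * (1 + random.gauss(0.0, noise)))  # imperfect components
    return outputs

# 3 input voltages feeding a 3x2 grid of conductances -> 2 summed output currents
V = [0.2, 0.5, 0.1]
G = [[1.0, 0.3],
     [0.5, 0.8],
     [0.2, 0.4]]

exact = [sum(V[i] * G[i][j] for i in range(3)) for j in range(2)]
print("exact  :", [round(x, 4) for x in exact])
print("analog :", [round(x, 4) for x in analog_matmul(V, G)])
```

The "analog" result lands close to, but not exactly on, the exact product - which is the "slightly imprecise, but good enough for neural networks" trade-off described above.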
I've been wondering about analogue usage of SSDs for a long time. It's an oversimplification, but each cell holds a voltage which can also be interpreted as an analog signal. If we take music as an example, you could basically write the value of one sample point to a cell, writing 16 bits' worth of information to 1 NAND cell. This of course makes it impossible to compress the music, but it would allow storing music 'losslessly' at the same cell usage as a compressed 256 kb/s file on TLC storage.
Of course, NAND reproduction isn't perfect (and as such, music reproduction wouldn't actually be lossless), but I wonder how close this would come compared to the compressed digital file. I think this could be potentially useful for offline caches and downloads from Spotify, for example, as the data can be checked and corrected when a high-speed network connection is actually available.
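As a back-of-the-envelope check on that idea, here's a toy Python model. The assumptions are mine (one 16-bit PCM sample per cell, the cell's usable window giving roughly N effective "analog bits" of precision, modeled as Gaussian error); it just estimates how far a read-back sample would drift from the original:

```python
import math, random

def store_and_read(sample_16bit, effective_bits=10):
    """Map a 16-bit sample to a 0..1 cell 'voltage', add write/read error, map back."""
    v = sample_16bit / 65535.0
    sigma = 1.0 / (2 ** effective_bits)                 # assumed cell precision
    v_read = min(1.0, max(0.0, v + random.gauss(0.0, sigma)))
    return round(v_read * 65535)

errors = []
for _ in range(10_000):
    s = random.randrange(65536)
    errors.append(store_and_read(s) - s)

rms = math.sqrt(sum(e * e for e in errors) / len(errors))
print(f"RMS read-back error: ~{rms:.0f} out of 65535 (~{rms / 65535 * 100:.2f}% of full scale)")
```

So under these assumptions the playback wouldn't be bit-exact, but the error stays a small fraction of full scale - which matches the "not actually lossless, but maybe close enough" framing above.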
Already? We did this before 1960.
I just found out that the SR-71 engines had a hydraulic computer that ran the system. That would be amazing to see. I worked at Caterpillar, and a friend of mine was tasked with converting a scraper transmission module from a hydraulic base to electronic. It was a very old design and all the original engineers had passed on. They had a team of engineers that had to replicate all the hydraulic functions in an electrical equivalent. It was fascinating to me. One of the functions they had to replicate was going up a steep hill with a full load and being able to shift without rolling backwards: holding the load, sharing the load between two clutches, and increasing one clutch while reducing the other to make it a seamless shift. So I enjoy this topic. Thanks!
Most nuclear power plants are pneumatic computers. Well, the old subs and breeder reactors anyway.
This was one of the best layman's explanations of neural net training models that I have ever seen. Awesome content!
ok
but it isn't; he literally said he's going to skip backpropagation (which is how models are trained nowadays)
i am a layman; i did not understand a bit of it, pun intended
The most interesting thing I found in this was when he was saying that in the chip they had to make it alternate between analog and digital signals to maintain coherence. It's interesting because the brain does something similar where it alternates between electric pulses and chemical signals.
The problem is that machine learning is still not capable of being used commercially in general environments (e.g. security cameras) because it can't handle unpredictable situations. The brute-force method of AI is still the only solution for general environments (e.g. self-driving cars)
@@chrisfuller1268 people also suffer from the brute force nature of processing information, aka confirmation bias. Thankfully we can take steps to correct this, but many people lack the tools and mindset to make these self corrections.
@@riskyraccoon Yes, humans are flawed, but we are capable of recognizing objects no matter what else is in our field of view. This is a task machine learning will never be able to solve in 100% of all possible (infinite) environments. Brute force AI requires more development effort, but is capable of also identifying objects in many environments. This is why machine learning is a step backwards in technology and why it should never be used in life critical applications.
@Adam H Amen, I never thought of the beast as an AI! The beast will be cast into the lake of fire so I believe he will be flesh and blood human with a soul, but the 'image of the beast'!
@Adam H yes, that is a very interesting way of looking at it! I think we're a very long way from an AI being able to reason, but we've been using AI to kill people for decades.
Rosenblatt’s Perceptron was essentially a one-neuron network, although he could perform logical operations on the binary data inputs before passing results, which gave it more power.
Minsky and Papert at MIT were concerned that Rosenblatt was making extravagant claims for his Perceptron and scooping up a lot of the available funding.
In their book, "Perceptrons", Minsky & Papert proved that one-neuron networks were limited in the tasks they could perform.
You could build networks with multiple Perceptrons, but since Perceptrons had binary outputs, nobody could think of a way to train the networks.
That killed funding for neural networks for decades.
In the late 1980s, interest was re-kindled when John Hopfield, a physicist, came up with a training technique that resembled cooling of a physical spin-glass system.
But the big breakthrough came when the error back-propagation technique was developed by Rumelhart, Hinton & Williams.
In this, neurons were modified to have a continuous non-linear function for their outputs, instead of a thresholded binary output.
Consequently, outputs of the network were continuous functions of the inputs and weights.
A hill-climbing optimisation process could then be used to adjust weights and hence minimise network errors.
The rest is history.
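A minimal sketch of the change described above - replacing the hard threshold with a smooth output so that errors can be reduced step by step ("hill climbing" / gradient descent). The task (logical AND), learning rate, and network size are my own choices, purely for illustration:

```python
import math, random

# Training data for logical AND
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

w = [random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5)]
b = 0.0
lr = 0.5

for epoch in range(5000):
    for (x1, x2), target in data:
        out = sigmoid(w[0] * x1 + w[1] * x2 + b)   # continuous output, not a hard 0/1
        err = out - target
        grad = err * out * (1 - out)               # slope of the squared error w.r.t. the sum
        w[0] -= lr * grad * x1                     # nudge each weight downhill
        w[1] -= lr * grad * x2
        b -= lr * grad

for (x1, x2), target in data:
    print((x1, x2), "->", round(sigmoid(w[0] * x1 + w[1] * x2 + b), 2), "target", target)
```

With a binary step output there is no slope to follow, which is exactly why the continuous activation was the enabling change for training multi-layer networks.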
And now, we're "evolution" but with awareness and intent heh
Critically wrong. Not one-neuron - that doesn't even make sense. One *layer*.
@@slatervictoroff3268 I have to disagree. A perceptron is one neuron (one neuron that receives multiple inputs and puts out one output). That also makes it a one-layer network, I would say
@@brunsky277 Thinking about it though, the inputs all had their own weights, and those weights correspond to a neuron. A modern AI model has inputs, and the weights exist on the layers.
Therefore the perceptron had 400 inputs, 400 weights, and 1 output signal.
That implies to me 400 neurons, in a single layer, leading to a single output value.
@@slatervictoroff3268 No, one layer can be multiple perceptrons; it's technically one neuron (which is technically one layer, though)
Great episode. Hadn’t considered the mix of digital and analog computers in a complementary fashion. I guess it’s not what I thought!
True, but the actual impact of this is not what you think.
Jesus, could you astroturf a bit harder please?
Until the 90s, US warships used mechanical calculators to compute gun aiming, something that would be perfect for your channel.
I see what you did there.
BS! Fourier ... ROTFL
Using an analog computer to demonstrate differential equations is a perfect teaching tool. I really wish tools like this were used in colleges more often.
They already use crap like this far too often. This isn't something to use in a differential equations course. Maybe it would be OK for a circuits course, or even a computer science course focused solely on analog computers. In math, just give us the numbers and the logic... Don't waste our time with this stuff.
@@JackFalltrades I am an engineering student.
@@JackFalltrades I'm not calling letting students know about use-cases for things crap. I'm calling it crap to take up class time that is meant for teaching students the logic of how to solve differential equations (because that is the class the original poster said it would be good for) and instead use that time to teach something that only a tiny fraction of the class would ever use.
it is used in control systems
@@Elrog3 He never said it should be used in a differential equations course. You just sound like the type of students that go to university and ask which courses they need in order to get a high salary position in industry.
The biggest issue is distortion. Inexact calculations due to imperfect components, degradation of the data when transmitted (wired or wireless), external EM interference, all conspire to make the use of analog a special challenge. Mixing digital and analog to play to the strengths of each along the way intrigues me. I'm old enough to have experienced the full evolution of digital computing. My mindset is therefore quite biased toward it. What you propose would be quite the eye opener for me, if it actually can be made to work as prolifically as current digital technology.
I assume there's a lot to be discovered on the topic of self-correcting algorithms, or even error-correcting analog circuits that compensate partially for the inaccuracies. Like what Hamming codes are for digitally transmitted data.
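Since Hamming codes came up: here's a minimal Hamming(7,4) encode/decode in Python showing how a single flipped bit can be located and corrected. It's the standard textbook construction, added purely for illustration, not something from the video:

```python
def hamming74_encode(d):
    """d = 4 data bits [d1, d2, d3, d4] -> 7-bit codeword with 3 parity bits."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4            # covers codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4            # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4            # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]   # positions 1..7

def hamming74_decode(c):
    """Correct up to one flipped bit, then return the 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3   # 0 = no error, else the 1-based position of the bad bit
    if pos:
        c = c[:]
        c[pos - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

word = hamming74_encode([1, 0, 1, 1])
word[5] ^= 1                     # flip one bit "in transit"
print(hamming74_decode(word))    # -> [1, 0, 1, 1], the error is found and fixed
```

Whether an equally cheap analog equivalent exists is exactly the open question being raised above.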
nature exists in chaos, technology is more and more approaching the chaos orchestra.
Distortion, really? I am under the impression that the biggest problem with analog computers is NOISE. You can never get rid of noise in an electrical system. Even if the hardware has no distortion, the inherent thermal noise in the system will cause some small calculation error.
@@StevenSiew2 that's true, but noise is something that AI needs to deal with anyway because the inputs will always be noisy to begin with. It can actually be useful to _add artificial noise_ while training a digital NN, to avoid overfitting issues. (Stochastic gradient descent can also be seen as a way of making the training “noisy”). As long as the pertubations are small and random, training won't be affected negatively.
Distortions, however, are hard to deal with. You may be able to train a model on a particular chip that has such-and-such distortion; because the distortion properties don't fluctuate and act as constant-but-unknown biases, the weights will ruthlessly overfit to that particular chip, and then it probably won't work at all on another copy.
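A minimal sketch of the "add artificial noise while training" idea mentioned above, using a single linear neuron on synthetic data (the task, noise level, and learning rate are invented; this only shows the mechanic, not how any real chip is trained):

```python
import random

# Synthetic task: learn y = 2*x from a handful of examples.
data = [(x / 10, 2 * x / 10) for x in range(11)]

def train(weight=0.0, lr=0.05, epochs=200, input_noise=0.0):
    for _ in range(epochs):
        for x, y in data:
            x_noisy = x + random.gauss(0.0, input_noise)   # noise injection on the input
            pred = weight * x_noisy
            weight -= lr * (pred - y) * x_noisy            # gradient step on squared error
    return weight

print("trained without noise:", round(train(input_noise=0.0), 3))
print("trained with noise   :", round(train(input_noise=0.05), 3))
```

Small random perturbations barely move the learned weight, which is why noise is tolerable during training; a fixed, systematic distortion would instead get baked straight into the weights.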
As a PhD student in this field, I can answer some of your questions.
Yes, we usually talk more about noise than distortion. And thermal noise is not the only issue, there is read and write variability, resistance drift in time, the resistance of interconnections, ...
However, it is true that neural networks can sometimes take advantage of the noise to avoid overfitting, but only a reasonable amount of noise and only in some cases.
Self-correcting algorithms and error correction are options, but it's not that easy. Usually this kind of method sacrifices performance or requires more energy (which is the opposite of what we want).
About mixing digital and analog: they presented it nicely in the video, but the digital/analog converters require a lot of energy (sometimes more than the vector-matrix multiplication itself). So we don't want to do it too often.
I remember the first time I heard the term "deep networks", it was back in 2009 when I was starting my MSc (using mainly SVMs), a guy in the same institute was finishing his PhD and introduced me to the concept and the struggles (Nehalem 8 cores era)...the leaps in performance made in NN since then thanks to GPGPU are enormous
A couple of missed points:
Things like Google's Coral are also pushing incredibly high numbers, and to my knowledge are doing it digitally as an ASIC.
Large models are expensive to train; there's no contention here from Mythic, you, or the wider AI community, but several advancements have been made in the last couple of years that let models be compressed and refined to less than 1% of their original size. This makes them incredibly small and efficient, even on traditional CPUs.
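One of the simpler tricks in that family of compression methods is post-training quantization: storing each 32-bit float weight as a small integer plus a shared scale factor. The sketch below is a toy illustration of the idea (my own example values, not any particular library's implementation):

```python
# Toy post-training quantization: float32 weights -> int8 values + one scale factor.
weights = [0.42, -1.31, 0.07, 2.95, -0.88, 0.0031]

scale = max(abs(w) for w in weights) / 127          # map the largest weight to +/-127
quantized = [round(w / scale) for w in weights]     # stored as 1-byte integers
restored = [q * scale for q in quantized]           # dequantized at inference time

for w, q, r in zip(weights, quantized, restored):
    print(f"{w:+.4f} -> int8 {q:+4d} -> restored {r:+.4f}")
```

Quantization alone buys roughly a 4x size reduction; the much larger reductions mentioned above come from combining it with techniques like pruning and distillation.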
I'd love to read about that! I've been dipping my toes in ML algorithms and many of the really interesting networks require an immense amount of memory to function, on the order of tens of gigabytes. I'm curious why those models require such an immense amount of memory, and what can be done to improve that.
@@Zeuskabob1 You don't really need tens of gigabytes to get a good model that can perform well on a task (usually). Most people still use models smaller than 5 GB or so.
+
thanks for pointing this out 🙄 seems like it was deliberately ignored for the sake of promoting this dumb startup
just another case of Veritasium making a bait video to make experts respond
I thought there was no follow-up to the first part; thank you for the satisfaction you have given me through the knowledge I got from this video
Can't wait to watch this with my kids. I forgot it was coming, we were waiting so long for it.....
Back in the mid-1960s my uncle, Joseph Grandine, designed a combination analog/digital computer that could optimally combine the two modes to solve complex problems in signal processing and data analysis. He called his computer the Ambilog 200. At that time, digital computing won the day. Now it sounds like he was a few generations ahead of his time.
Amazing. Just found two articles about it in the Internet Archive.
That is awesome
That's amazing. We should show him this video and ask him what he thinks about it.
cap
I would love to know him
this makes me want Derek to talk about neural networks and AI-related topics a lot more.
it's not just extremely interesting but also constantly developing.
i've always mused about this to myself; i always thought 'why not use analogue to calculate certain things'. there's lots of stuff in physics that's extremely hard to calculate but just 'happens' in the real world in an efficient way; the surface of a bubble, for instance, minimises surface area very rapidly in a way that takes no effort on the bubble's part, but is incredibly hard for a digital pc to calculate. the tricky part (and the reason people doing this are smart scientists/engineers and i'm not) is figuring out how to wrangle "the bubble" into a portable and responsive piece of hardware, and it's super cool to see efforts made in this direction are having success
Same thought. I used this technique to figure out a way to solve mazes super efficiently with flowing water. I think that’s what’s happening with quantum computers also.
It always comes back to the drawbacks Derek mentions at the beginning of the video: Analog processing is single-purpose, error-prone, and hard to repeat.
As such, for your physics example, it would invalidate A LOT of the data you get back, since you cannot guarantee a certain level of falseability, auditability, and error margins.
You CAN get there, but you start requiring A LOT of boilerplate circuitry around the actual solution solving hardware. As a silly and basic example that is almost trivial nowadays, but still there, you can think of the necessity of adding a lot of surge protection and current stabilization to a circuit to ensure that the natural unsteadiness of current in the power grid won't skew your results.
And that's even just taking into account discrete and "simple" issues to calculate. Imagine processing data for some chaos-related physics theory and basically getting pure rubbish at the end, because even the slightest microvolt-level disturbance automatically distorts everything.
How about external interference? Or electromagnetic interference between the actual wires in the circuit?
As I said, not imposible to tackle, but you suddenly have an overhead of 90% boilerplate just to make the results useful on anything practical.
I can't even imagine all the engineering that must have gone into those Mythic semi-analog chips for AI, just to keep everything tidy. The fact that a Realtek-sized chip can give you one third the performance of some Nvidia Quadro (or similar) card, for a fourth of the power consumption of a cheap entry-level mobile Core i3, is just astounding!
To be clear, these Mythic chips point towards a future resurgence of analog processors not dissimilar to what digital ones brought in with their unparalleled versatility.
Outside of very bespoke chips for very high amounts of money, probably in the realm of high-end research and science, I can see a general idea of modularity at a functional level. Say you manufacture analog chips that can do some very important but expensive math operations that are common for most science in some specific branch (say, a lot of transformations, or integration, maybe some Lorentzians, and so on). Then research groups, institutes, and universities do the same as electronic engineers do with good old breadboards: DIY some complex formulae on the fly, test their hypotheses, and iterate over the formulae as needed.
Imagine those astrophysicists doing 2k-term polynomials being able to duct-tape a dozen chips together, the same way electronic engineers use logic gates as basic digital units, and getting the results out in a couple of hours, instead of having to write a piece of software that takes a couple of days to run and a week to write, where any mistake or failed result requires another week of debugging just to make sure it failed because you were wrong, and not because you typed a 5 where a 6 should have gone when writing all 1500 terms of one of the formulae
Yes I've always thought this too.
Aside from the negatives mentioned in the vid and comment above:
"if you can't calculate it, let nature do it"
I've used the term 'calculate' here, but I think it applies in a broader sense. If something is too hard to manufacture / produce precisely maybe nature can do it better.
@@yoshienverde wish I understood what you were saying but great rebuttal
One of my first "computer" classes in engineering school was learning to wire up an analog computer and solve differential equations. Because I had to "assemble" the hardware for the process, it felt much more hands-on than when I took a punch deck to the little window, and waited for up to 20 minutes for the compiler to tell me I had no idea how Fortran worked. At the time, I really appreciated that parameters on the analog could be changed quickly in order to see how different currents, voltages, resistance, etc. affected the outcome. Of course, now with the speed of digital processors, the efficiency of Python libraries, and the Interwebs, I have largely gotten to appreciate the digital world. Now, Derek has got me jazzed to buy a portable analog. $200 on Ebay?
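For readers who never got to patch one of these machines: the heart of that exercise is that an op-amp integrator "solves" dy/dt for you continuously, and turning a knob changes a coefficient instantly. A digital equivalent has to step through time explicitly. The sketch below does that for a damped oscillator, m*x'' + c*x' + k*x = 0, the kind of second-order equation those panels were typically wired up for (the coefficients here are arbitrary, chosen just for the demo):

```python
# Digitally stepping the ODE an analog computer would solve with two integrators:
#   m*x'' + c*x' + k*x = 0
m, c, k = 1.0, 0.4, 9.0      # arbitrary mass, damping, stiffness
x, v = 1.0, 0.0              # initial displacement and velocity
dt = 0.001                   # time step

for step in range(5001):
    a = -(c * v + k * x) / m  # acceleration from the current state
    v += a * dt               # first "integrator": velocity from acceleration
    x += v * dt               # second "integrator": position from velocity
    if step % 1000 == 0:
        print(f"t = {step * dt:4.1f} s   x = {x:+.3f}")
```

Changing m, c, or k and re-running is the digital analogue of twisting a potentiometer on the patch panel and watching the trace change.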
Yeah, my computer classes in engineering school had a similar thing, though with us it was opamps. It was not a full class, but we did it around the same time as learning FPGAs and having to implement complex programmable digital logic, so it was a good reminder of 'digital logic with an ADC/DAC pair is not always the best or simplest solution'
While it's absolutely not the same thing, I encourage newish programmers to write a 6502 emulator. It's about as close as one can realistically get to building your own CPU hands on, which IMHO gives a worthwhile different perspective to the field than the now common approach to never leave the comfort of interpreters and virtual machines.
There's an entire field of research called neuromorphic computing/engineering looking into this very problem. It was pioneered by Carver Mead in the 90s and has seen a lot of interest lately.
I was waiting for him to either mention the words "neuromorphic" or "memristors"
I remember reading about Mead's analog stuff in the 1980s, something related to hearing. Perhaps my memory is wrong.
Great video. Never knew this was a thing. Very useful. Might one day just be an extra part on the motherboard designed for fast approximation calculations
I'm a little disappointed by the title but impressed by the content.
It's less "we're building computers wrong" and more "old method is relevant in a niche application".
There's also the eventual plans for fully commercial quantum supercomputing clusters and ever faster internet connections which might further limit the applicability of these chips going forward. However, building processing-specialized chips instead of relying on graphics cards seems really promising in the short term so long as the market stabilizes.
Derek actually made a video a few years ago explaining why Veritasium would start using clickbait titles (to appease the YouTube algorithm)
It got you to click didn't it?
It’s for the click bro. And for good reason. If any channel deserves to clickbait, it’s this one.
@@dinglesworld agreed 100%
it's not really niche when AI and algorithms are used everywhere.
Yeah, the old Navy fire control systems - along with directional aspects of sonar/radar - were analog from beginning to end, and the math required to come up with a fire-control solution that was stabilized in 3d on a moving ship, was intrinsic. It didn't compute as we think of it today - the problem and the solution were just a single feedback loop.
I remember early in my training when I grasped this, it seemed like magic. Completely steeped in digital computation in my current work, it still seems more magical.
Fascinating! I was one of the first engineers to 'train' a computer to recognize handwritten numbers. It was used for reading zip codes for post office sorting. It worked quite well, and the method I dreamed up is what is used today. Namely, getting many samples (I sent pages around the office asking people to fill in boxes with the digits zero to nine). The variability in human handwriting was amazing. Then I separated each box into nine areas, and a program determined whether each area had a mark or not. By playing with the various combinations, and tweaking it for often-confused numbers like 5 and 6, we got a very low error rate. I'm happy to see I was on the right track sixty years ago.
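Roughly what that approach looks like in code - my own reconstruction from the description above, not the original program: divide the digit's box into a 3x3 grid and record which zones contain ink, giving a tiny 9-element feature vector to compare against known digits.

```python
def zone_features(bitmap):
    """bitmap: list of equal-length strings, '#' = ink. Returns 9 flags, one per 3x3 zone."""
    rows, cols = len(bitmap), len(bitmap[0])
    features = []
    for zr in range(3):
        for zc in range(3):
            r0, r1 = zr * rows // 3, (zr + 1) * rows // 3
            c0, c1 = zc * cols // 3, (zc + 1) * cols // 3
            ink = any('#' in row[c0:c1] for row in bitmap[r0:r1])
            features.append(int(ink))
    return features

seven = ["######",
         "    # ",
         "   #  ",
         "  #   ",
         "  #   ",
         "  #   "]
print(zone_features(seven))   # which of the 9 zones this '7' touches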
Great job, optimization is wonderful.
Bruh wtf....u r a legend
Indeed! Great brain!!
Now, learn about and teach folks why we have "zip" codes, and precisely why. You'd be surprised.
Wow! Just wow!!!
Great Work Ser
I agree. One point I was going to mention regarding analog computers is that they are susceptible to voltage fluctuations and environmental noise, and the accuracy of your results is directly dependent on the accuracy of the equipment reading the output voltages. There is that, but this makes sense when talking about specific applications like this one 👌
Amazing video!
Wow nice video
Nice video
My dad's "Back when I was your age" stories on computing were about how he had to learn on an analog computer, which, according to him, you "had to get up and running, whirring at just the right sound--you had to listen for it--before it would give you a correct calculation. Otherwise, you'd input 100+100 and get, say, 202 for an answer." he hasn't been able to remember what make/model that computer was, but i'm curious. any old-school computer geeks out there know what he may have been talking about? Era would have been late 60s or early 70s.
It sounds like your dad's computer was before the invention of the transistor. There was an analog computer at the electronics lab at the university of Hull, UK (when I was a student there in the 80s) that had moving parts. I remember when it became unstable and the professor sprinted across the lab to shut it down before it self-destructed. Something spinning suggests a sine wave generator for example.
Yeah, just like analog synthesizers. You have to let them warm up to a stable temperature first or they would constantly drift out of tune while playing. This was later solved with digital controllers.
Maybe the older mechanical calculators. Linus Tech Tips did a video on those. Super interesting stuff.
I can't address your question directly but in the later half of the 1960's I worked on a helicopter simulator, used to train military pilots, in which all computations simulating flight were performed by analog circuits made up of transistorized (no IC's) operational amplifiers and servo motors with feedback.
This whole machine was housed in a 40 foot long semi trailer. In the rear of the trailer was a cockpit from a CH-46 helicopter including all the controls and instruments but the windows were frosted over so you were always flying IFR in a fog, i.e. no visuals. Next as you moved forward was an operator's station where you could control parameters such as air pressure and temperature, activate failures such as engine fire or hydraulic failure and such. The remainder of the trailer contained a row of electronics racks on each side housing the amplifiers, servos and other circuits that performed all the calculations.
We can look at main rotor speed as an example of how it worked. Rotor speed was represented by the position of a servo motor from 0 to 120 degrees. The position of the motor was determined by the output of an amplifier whose inputs were derived from many variables such as engine power (there were two), collective control position and altitude. Attached to the servo motor was a potentiometer whose output drove a cockpit instrument but was also fed back to amplifiers/servos which were used to calculate engine power and such.
There were many such subsystems with feedback loops interconnecting them so that failures were very difficult to diagnose. Often the only way to resolve a problem was to take a guess at which part might have failed and replace it. Also routine maintenance was very labor intensive as the many potentiometers would wear and need to be cleaned and then realigned which might take an hour for each one.
As a young man I was totally amazed and fascinated by this technology. As an old man I can't believe that it really worked at all. But it did, at least some of the time.
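The rotor-speed example above is essentially a feedback loop: an amplifier drives the servo toward whatever position the current inputs demand, and the potentiometer feeds the result back. A crude discrete-time imitation of that behaviour (all mixing ratios and gains are invented, nothing from the actual trainer):

```python
# Toy first-order servo loop: motor position (0-120 deg, standing in for rotor speed)
# chases a commanded value derived from 'engine power' and 'collective' inputs.
def commanded_speed(engine_power, collective):
    return max(0.0, min(120.0, 60.0 * engine_power + 40.0 * collective))  # made-up mix

position = 0.0         # servo position in degrees
gain, dt = 2.0, 0.05   # loop gain and time step

for t in range(60):
    target = commanded_speed(engine_power=1.0, collective=0.5)  # pilot pulls collective
    error = target - position          # what the summing amplifier "sees"
    position += gain * error * dt      # servo slews toward the target
    if t % 10 == 0:
        print(f"t = {t * dt:4.2f} s  target = {target:5.1f}  position = {position:6.2f}")
```

The position creeps up and settles on the commanded value; in the trailer that settling behaviour, fed back through the potentiometers into the other subsystems, was the whole simulation.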
Back in the day, circa 1957, I was an Electrical Engineering student at the City College of New York. In one of the labs we constructed an analog computer using physical components like motors, gears, etc. There was absolutely nothing binary/digital involved except whether you passed or failed the course.
A couple of years later I worked with a Bendix G15 computer with an optional DDA (Digital Differential Analyzer). The DDA was an analog computer; input and output were analog. You can look it up on Google. Search for "Bendix G15 computer with dda".
This was a banger of an episode. I was enraptured the entire time. Tight story telling with a great hook and title. You're a pro, man.
Fancy seeing you here ❤️
hm
hook, line and thinker
Damn Canadians keep blowing my mind.
TELL THEM TO STOP IT!
I'd be interested to see a digital computer adopt an analogue component, possibly to be utilized in situations of physics simulation, much like how a GPU is utilized to independently create graphics from the CPU.
So, this was Harold Finch's solution in Person of Interest. His ability to create an autonomous, observant AI to identify dangerous behavior was a result of Rosenblatt's work, and he did it 15 or 20 years before anyone else would even attempt to do so.
Missed an opportunity to mention him in PoI. Neumann was mathematics, Turing is the father of modern computing, but Rosenblatt was a maverick on the nature of neural networks.
Oh my god thank you for bringing this up. Ever since i watched that show I have been set on learning everything and anything about AI. It has inspired me set me on my current course in Comp Science.
Fantastic video, and I learnt a lot, being a biologist. Small correction: neurons (the real ones) are indeed analog in the sense that they can tweak their output and fire, fire more, or fire less, just like an analog computer. This happens through a combination of changes in neurotransmitters, their dumping at the synapses, and neuropeptides that can change the 'gain' of the neural networks.
Exactly! The analog behavior of neurons is closely modeled by the analog current/voltage exhibited by the tweaked transistor cells, as so well demonstrated and visualized in the video.
Neurons have a lot going on inside, and things are happening outside, that affect what they do and when they do it. It amazes me that computer neural networks work at all, let alone as well as they do.
Ya, so take what he's wrong about in the field you know and apply it to the field I know (comp sci), and suddenly the entire video is a load of rubbish.
@@kalliste23 Don't get me started about the neurons in a squid vs a human... they have fewer, bigger and more complex neurons.
@@vyor8837 Not a load of rubbish, just amazingly primitive compared to what it's trying to mimic. And also use specific. And not really capable of learning new kinds of tasks. ;-)
Interesting video, but felt a little bit too hyped up for me ^^
The discussed challenge appears to be a highly specific application: matrix multiplication. The solution shown here was an analog ASIC (application-specific integrated circuit), which is a type of chip we've been making for over half a century. Once a task becomes both computationally expensive and very specific, the fastest method has always been to make a specific chip for it. Nor is analog multiplication anything new; I remember being taught the little analog multiplier circuit with the Gilbert cell over a decade ago.
most of his videos are like this lol
I believe Derek found a little niche to focus on since he did the video on the ancient Greek analogue computer, which had an almost identical conclusion
I feel he is on to something here ... maybe the real benefit is that you don't have to make all these specific chips, because in principle one fairly big analog one could do everything you threw at it. But it feels a bit scary to me too, because you are getting closer to biological systems.
Yeah, the reduction in power consumption I'd imagine is mostly due to it being an ASIC, not being analog. There are quite a few digital AI inference ASICs coming onto the market as well - I'm curious to see which ones will reign supreme
You can look at it this way: digital is a jack of all trades, and analog is a specialist who's inserted into very niche areas along the production line.
This is missing any mention of the other big alternative: photonics. Startups like Lightmatter have shown that this is another very potent approach. Its benefits are astonishing: it isn't limited by electronic bandwidth/losses, and one circuit can run the same calculation multiple times at once by using multiple colors/wavelengths. It was also left out that a big problem with these systems is the bottleneck in converting from general compute into these analog domains.
Hopefully he covers this topic in the future
how are u so smart
Probably because it's also an emerging technology, and because photonics just swaps in photons for electrons as the carrier, while the video is contrasting two fundamentally different ways of computing.
I was waiting for him to get to photonics, too. It's a HUGE opportunity for crazy amounts of parallel processing. And then there's the white whale of quantum computing, too...
I want my positronic brain patch
Awesome information!
I gave you your first like 😁
@@Mani_Umakant23 Why would you like such an unoriginal comment that provides so little value or thought?
@@N____er Aise hi sexy lag rha tha.
@@N____er Don't say anything bad about ElectroBOOM he is such a wonderful creator
Hi sir I am a big fan of yours
Some years ago as I was finishing up on my computer science education, I had to do a project for my finals (this is not university, btw, just a secondary kind of route available to me in my country) and I had always been fascinated by artificial intelligence and neural networks, so I picked to do something in that realm. I never had anything to do with AI prior to that, so my knowledge was quite shallow when it came to how artificial neural networks actually work. I had been programming for quite some time at that point so I had my basic tools in place, so to say, but I really didn't know where to start from there when it came to what I picked as a "task" I wanted to solve by this method.
The task I picked was driving a little simulated 2D-car along a randomly generated road without any human input and unfortunately, the language I was most proficient in at the time was Java, so I tried to implement it in Java.
When I first started the project, I read a lot about neural networks and even included some scientific papers in my reading, but not coming from any kind of scientific background, I really struggled to understand the deeper concepts presented in them, which eventually led me to abandon reading up on the topic and just approach it with the general question of "How could/would it work?".
Looking back, I think I have never in my life experienced a time when I was more challenged by a task I set out to complete and also never before I was thinking that much about how our brains actually process information, as that is the key idea behind it. Yet, slowly but surely, I ended up with a pretty similar concept to what most neural networks do, even though probably far less efficient than using something that was already available like TensorFlow.
Where am I going with this story? Well, after a while it actually worked: the little network I created "learned" how to drive the car along the road after countless iterations of doing it wrong, and to this day it absolutely blows my mind on so many levels that this was possible. Even more so considering that all of these complex processes take place in our brains every microsecond we exist, with much more proficiency. I absolutely loved that project from beginning to end, and beyond the rest of that educational path and years of aimlessly programming for fun, it made me realize that I truly love the entire concept of using programming to recreate something from nature.
Kind of a pointless anecdote, sorry.
It’s not pointless, that was a good story.
@@Velocitist Thank you, I just felt like reminiscing and sharing the experience, which was actually quite enlightening to me, after watching.
Not pointless at all! Many of us are developers, and that means working and studying with pet projects.
From my point of view, the way to learn is developing from scratch, while in parallel learning the already-built libraries or dedicated software for it (I'm playing with a bunch of neurons made from scratch, and OpenCV).
Cheers!
that’s sick
That analog neural network was really interesting. But to me it's still essentially digital, i.e. discrete.
In a normal digital solution you might have 16 possible values for the weights which would be encoded as 4 bits and would need to undergo addition/multiplication. But in the "analog" solution you encode the weights by setting one of 16 distinct voltage levels. The available voltage levels are quantised, not continuous so it's still a discrete system.
It's great that you can do addition by just summing currents and multiplication by changing resistance. But you can even do this with binary: AND gates act as multipliers and OR gates as (saturating) adders if you only have 1 bit of data (1 OR 1 gives an overflow condition, but the "analog" design will need enough voltage levels to avoid overflow too, e.g. 7 + 13 would give the answer 16 if that were the highest voltage level). I'd say it's still digital, but it's not binary. It's multi-level logic.
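For what it's worth, here's a rough numerical sketch of that multi-level idea (the 16 levels, the input voltages, and the clamp at full scale are just my assumptions, not the actual design of any chip): weights are quantised to discrete conductance levels, multiplication is Ohm's law, addition is summing currents on one wire.

import numpy as np

LEVELS = 16  # 16 distinct weight levels, roughly 4 bits per cell

def quantize_weights(w):
    # map weights in [0, 1] to one of 16 discrete conductance levels
    return np.round(w * (LEVELS - 1)) / (LEVELS - 1)

def analog_mac(voltages, weights, full_scale=1.0):
    # multiply via Ohm's law (I = G * V), add by summing currents on a shared wire,
    # and clamp at full scale to model running out of headroom (the "16" overflow above)
    g = quantize_weights(weights)
    currents = g * voltages
    return min(currents.sum(), full_scale)

v = np.array([0.2, 0.5, 0.9])
w = np.array([0.13, 0.77, 0.40])
print(analog_mac(v, w))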
"Better suited" is the key. Quantum computing will fall into the same clause: there some things quantum computing is "better suited" for.
Quantum computers (ones that use physical qubits) are only hypothetical, but people talk as if they already exist in reality. They don't; there is not a single, fully functional quantum computer on the planet, and there might never be.
@@BreaksFast Sure they exist, they just don't have very many q-bits.
@@BreaksFast The computers do exist, but there is a lack of understanding in programming them to do classical computing problems.
@@BreaksFast They can and do exist, albeit with very limited qubit counts. The first experimental demonstration of one was in 1998. D-Wave Systems are selling computers with 2048+ qubits right now.
@@BreaksFast Tell that to IBM?
Analog computers are actually a hardwired set of circuits “programmed” for a particular task. They excel at massive parallelism and true real-time performance. In addition to analog circuits built using transistors or tubes, optical devices such as prisms (or rainbows) do real-time spectrum analysis at light frequencies, and have real-time color displays. To duplicate the performance of an optical prism at those frequencies using digital circuitry would require massive arrays of digital hardware multiplier/accumulators. I did the calculation once in the mid 1990s, and at that time it would have required about 600 MW of power. Early spectrum analysers developed for military applications took audio or radio waves, upconverted them to light, and used a prism to do the spectrum analysis.
If need be, could analog computers be made to go digital temporarily? If so, it would mean that they can perform accurately for a time and then go back to analog for complexity.
@@LeTtRrZ I don't think so, they are so fundamentally different it would be hard to integrate them. It would be easier to have 2 circuits
@@LeTtRrZ There are tons of studies on reconfigurable architectures, and the theory of how well a digital computer can represent analog behaviors (so, I reckon, the opposite of what you were conjecturing) is well known, as are the implied constraints. The open question is which set of building blocks should make up the reconfigurable fraction of the architecture, and in what abundance. As soon as you fix the number of components available and the maximum degree of connection reconfigurability, you define a limit on what you can represent in a given amount of time. As a passage in the video suggests, there are studies on architectures with digital boundaries between analog slices, but the possibility of error correction is very often (I'd dare say always) a consequence of knowing what you want to represent: if you don't know what you're representing, you cannot tell whether you're doing it right or wrong. At best you can exploit some underlying characteristics of the representational space. For example, if you know your values should fall on one element of a grid, you can correct an analog result by choosing the nearest grid element, which is, in a sense, re-digitizing the result. But knowing what you're representing constrains the freedom of the intermediate representations...
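A toy illustration of that "nearest grid element" correction (purely illustrative; the grid size and noise level here are made-up numbers):

import numpy as np

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 16)  # the values a result is allowed to take
true_value = grid[9]
analog_result = true_value + rng.normal(scale=0.01)  # analog compute adds a little noise

corrected = grid[np.argmin(np.abs(grid - analog_result))]  # snap to the nearest grid element
print(analog_result, "->", corrected)  # equals true_value as long as the noise stays under half a step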
@@andreafedeli3856 Why not just allow the computer to ration between digital and analog based on the demand of the task it’s attempting?
@@LeTtRrZ He says that in the video. It's exactly what they do. @ 18:56
Funny, I always thought of Asimov's positronic robots as analog computing, but as a programmer it was difficult to understand how to work with analog instead of binary. This video makes a lot of sense, though, and I can see how the combination of voltage and frequency can influence the result, and how the combinatory power of multiple inputs, each weighted differently, can determine the final result. I only understand a glimpse of it, I know, but my imagination and this video let me see how it can relate to our brain's neural network. Amazing!!
Analog has some sort of an error factor, but that error factor can be used for good in terms of evolutionary algorithms.
The world is NOT Digital aka quantum, but it is Analog. These machines will prove this fact and change how we come to see reality.
@@Mashimara I am not 100% sure, but quantum computing seems a bit like an analogue of the analog computer, built on similar principles but at a much smaller scale: an analog computer uses thousands to millions of atoms to represent a value, while a quantum one operates on single atoms, electrons, photons and other particles. Quantum computers also use entanglement, which I think no one quite understands, so people explain it with nonsense ideas like communication faster than the speed of light. They use paradoxes to try to explain that possibility, but it would still break the basic law of physics that nothing is faster than light, ever. But I am not sure 😃
@@RAF-777 oh, I can explain it really easy. 4 spatial dimensions, in an expanding HyperTorus and HyperSphere
@@aoeu256 No, analog doesn't. He was describing our implementations of analog. There is no inherent theoretical error rate in analog systems; they are in fact theoretically perfectly accurate, while digital can't be. An analog circuit could theoretically carry the precise value of pi; a digital one can't.
I used this content and visuals for my Electronics Engineering final year technical seminar. I loved the content, and the way it's put together. Thanks for choosing the most interesting stuff to put in my feed.
When I was an engineering student in the 1970s, a course on analog computers was required after a first course in linear differential equations (using slide rules was allowed in class, but not electronic calculators). Regarding your description of analog computers compared to digital computers: we used operational amplifiers to create integrators and differentiators, so there are transistors (tubes, before bipolar transistors) in analog computers, not just resistors and capacitors.
@africancheez Engineering is a great profession. Along with Analog Computers we had to take classes with Digital Computers. As a student, I used the HP1100 (CPU , paper tape reader, 1MB hard disk), IBM370 (teletype and punch cards), and PDP11 (punch cards) to learn Fortran IV, Pascal, and Basic. My professors were experimenting with the TRS80, Timex, and Apple 1 personal computers (most arrived as kits) as a hobby (top speed was 1MHz...1977 timeframe). In the computer lab, there was a desk top PC made by Commodore (commodore PET with a built-in cassette tape for mass data storage), but only Grad Students were allowed to use it. It was good to know the difference between solving certain problems using a program on a digital computer versus solving the same problem using a far simpler analog computer. good luck to ya.
I can only speak for my associates degree program but we still do that. This past semester we used IC opamps to make integrator and differentiator circuits. They were some fun labs. I love seeing some of the signal art people make on oscopes.
@africancheez "we have had the technology for a long time, but the world just wasn't ready for it yet."
It was never just a technology problem. Do you want your data in someone else's warehouse? Very often, no. Actually I believe it is never optimum since you lose some 4th amendment rights regarding search and seizure.
On the other hand, a small company that cannot justify the expensive and reliable talent needed to guard company secrets has no good choices and might as well go "in the cloud" but even there dividing up the cloud (some to Amazon, some to Microsoft) would prevent or reduce complete and total disaster.
When I was in the Navy I worked on the Fresnel Lens Optical Landing System. There was no 1% error; the tolerance was 0.005 VDC over a minimum 5 VDC base, i.e. 0.1%. The computers had a lot of math to solve to target the hook touchdown point for each aircraft. It was completely analog and op-amp driven, and it has been around for over half a century. I've witnessed many, many old analog machines in manufacturing since then. Analog technology isn't new technology; it's forgotten technology pushed aside by digital. I'm happy to see it hasn't completely died.
In this video you have also given the most simple, straightforward explanation for AI training and inference I've ever seen.
When I was at my institute back in 2016 I was thinking about these specific “gates” as an undergraduate. I knew someone was already implementing it, but I still miss the time when I could have been part of the innovation. What a genius way of reimplementing circuits for neural networks. Maybe that's what the future of FPGAs is: neural networks.
I was a finalist in the state science fair competition back in the 4th grade. So around 1977.
My project was a board about 2 foot by three foot, full of 2 and three position switches and colored lights. It was a logic board that could solve various types of equations. Pretty cool in a time when almost no one had ever touched an actual computer.
In the end I learned absolutely nothing about computers. But I learned to solder really, really well from it.
Moral of the story, if you can't learn to code, at least learn to solder. :)
Never too late to learn :)
Start with arduinos, they're amazing little chips that can do wonders. Even something simple like a temperature sensor might be fun.
lol that's amazing
whats a solder?
@@Kenshiroit Google will help you. But basically it's how you fuse wires together to form a bond for passing electricity.
@TheMurchMan then turn it into a hobby... Maybe do it for fun on weekends or just let it be your side hustle if your location allows it
We studied them in school, 50 years ago. Analog computers predate the digital computer. I wanted to build one way back then, never did.
If it makes my graphics card cheaper, I'm all for it
Finally, a fellow PCMR member
you dreamer you
@@hridayawesomeheart9477 finally, an average redditor
It sounds like it could make GPUs more power efficient. GPUs are starting to use AI to make certain computations more accurate, so maybe an analog chip on our GPUs could handle that instead.
@@BlueDrew10 I agree, but the first major bottleneck is, like he said in the video, the massive power requirement to train each AI, each needing about three households' combined annual energy usage, so mass production seems inefficient.
This just goes to show that no knowledge is useless. When I was in my final year of my undergraduate degree ( Electrical Engineering) I took a course on analog computers and the general consensus was that this field was obsolete. That year was the last year that the course was taught as it was phased out in the new curriculum.
Well done. My teachers told me things in a more obscure way, skipping the point, in order to expel more students from the university.
I remember my friend once made a completely analog line-follower robot. He implemented a PID controller using op-amps, trimming the parameters with 3 variable resistors. It actually worked quite well!
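For anyone curious what those three trimmed parameters do, here's a minimal discrete-time PID sketch in software rather than op-amps (the gains, timestep and error value are hypothetical example numbers, not from that robot):

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt                  # what the integrator op-amp accumulates
        derivative = (error - self.prev_error) / self.dt  # what the differentiator stage produces
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=1.2, ki=0.3, kd=0.05, dt=0.01)  # the "three variable resistors"
steering = pid.update(error=0.08)             # error = how far the sensor says we are off the line
print(steering)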
That's pretty creative and clever of him.
opamps are wonderful
That sounds like the micromouse project. In the UK in the late 90s it was a challenge for electronics students at schools (electives, years 10-12). Really fun finding solutions to the problem of navigating either a line on paper, a 3D maze, or both.
Your comment was stolen :/
Brilliant to use flash memory technology for analog computing!
When I think of analog computing I think of using operational amplifiers, resistors and capacitors to do the calculations. These are very limited in bandwidth, so I thought optical processing would take off. Lightmatter has produced a light processor that can do large matrix multiplications and additions. I didn't expect flash memory technology would be used. Very creative.
I think we are going to see a mix of different technologies in the future, perhaps optical processors for training neural networks, analog processors for executing them, and digital for the rest. I'm sure digital will never be replaced, since there are many problems where we need an exact solution and where a program needs to run for a long time without errors. Electronics will also never disappear, since electricity is so fundamental: energy production and storage are electrical, sensors and actuators are often electrical, and we have powerful electrical processors, both digital and analog.
As I sit here working on my NLP research, I happened to get this video suggested to me. Thank you for covering this field, and doing such a great job explaining the history of neural networks. I love your videos, and thank you for the breadth of knowledge and experience you impart on your viewers. Dr. Muller, you are an amazing champion and advocate for the STEM field, and I think what you do with your channel is exemplary. Thank you!
this guy is explaining complex things in very interesting and easy way. hats off to your work sir
What a great video. This is the first time I clearly understand how we got to deep neural network algorithms, because most of the time we learn what they are and how they work, but here I see the fundamental ideas that led to them. Besides, going back to analog but with newer technologies is fascinating, as if we were discovering our world again. Digital computing was a revolution, and there is a new revolution happening here, certainly using both principles (with AD or DA converters in between). Even if lots of research goes into the quantum domain, which is still immature, analog computation already allows a lot.
Also, as an engineer in the space domain, I see here an extension of what we do all the time: acquisition values are often provided as a voltage with a calibration curve allowing the interpretation. So we commonly use analog signals and computations. We even use matrix-like analog computations, though most of the time for a 2x2 case (example: you have 2 switches reporting their positions through resistance values; if the two switches use different scales of resistance, then a single signal holds the information about both positions). So we do use analog signals, able to carry more complex information than digital ones. But here it goes a great step further, because it applies to matrix computing. This application is going to revolutionize computing science, and that may happen quite soon because all the fundamental science and technology is already known.
FPGAs (Field Programmable Gate Arrays) are also becoming bigger and more powerful... ASICs are great, but being able to reprogram the analog computer combines advantages of analog and digital. It's nice to be able to update machine learning models without requiring new hardware for every iteration.
1) Small FPGAs may be affordable, but good luck doing AI on those. Bigger FPGAs (e.g. the one our company had) cost well over 10k USD. In other words, "bigger and more powerful" is not the most important issue.
2) How many people know how to program an FPGA? C++/python programmers are a dime a dozen, but how many know Verilog?
Not saying they're useless - on the contrary, doing AI on FPGAs is an active field of research.
IIRC, analog FPGA-like devices are starting to emerge.
@@pierrecurie it might not be feasible for all circumstances yet, but there have been improvements in size and also cost in recent years. Also, there exist machine learning applications that do not need thousands of neurons (as the Imagenet stuff described in the video does). For my bachelor thesis, I did some research on anomaly detection using auto encoders and lower dimensional sensor data, those models could fit on affordable FPGAs. Sure, a microcontroller can handle that too, but lower energy needs and less heat output are still advantages in some embedded use cases...
When I was first reading about neural nets when they hit the main stage, I had just completed several analogue electronics courses. It struck me as a natural marriage of the two realms. Analogue processing with digital memory would be ideal for human centric applications.
You are quite right to point out that digital memory would be a necessity, as encoding information in an analogue way is totally inefficient. The problem, though, is it's not like memory is just a thing that stores information when it's not in use. Memory is also a key component of any active process. So to suggest that you do analogue processing with digital memory, would essentially mean you'd need to be spending a huge amount of effort converting between the two during even a single process routine. The other solution, to keep active memory as only analogue, isn't much better, because then your digital memory is essentially bottlenecked by your active analogue memory.
@@MassDefibrillator if I understood correctly part of why analog is better for processing is that they don't have to deal with accessing the stored data multiple times just to compute something. Instead it just computes it and spit out the result. Using digital memory would thus be the equivalent of having a piece of paper hold the value (digital memory) while we (analog processor) do the actual work.
What speed you lose converting to and from digital as needed is more than made up by how fast analog can process data.
@@Temperans The more I think about it, the more I think it's not really possible. Analogue accesses the data just as much. The only difference is that the data is stored in analogue during the process, so you are limited by analogue's terrible inefficiency in information representation. So, in analogue, the number 20 is stored as a voltage during processing. Let's assume you need to represent the numbers 0 to 255, the 8-bit range. To do this, your voltage meter would need to distinguish at the millivolt level; you can't do it at the whole-volt level, because then you'd need a range of 0 to 255 volts. So let's assume your voltage reader can distinguish at the millivolt level. It needs to do that accurately: it can't confuse 3 millivolts with 4, otherwise it's useless. To get a voltage reader capable of doing that, you need a much more expensive and complex bit of tech than the equivalent digital processor would need.
I noticed this in the video. When he talked about how simple analogue maths was, he was talking as if humans could read voltage levels themselves. He neglected to mention that you would need a much more sophisticated voltage reader in an analogue system than you need in a digital one. So in reality, analogue is just offloading the processing complexity to the voltage readers.
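To put rough numbers on that resolution argument (the 1 V full-scale range below is just an assumed example figure; a real design could use a different range):

full_scale_volts = 1.0  # assumed full-scale range
for bits in (4, 8, 12, 16):
    levels = 2 ** bits
    step_mv = full_scale_volts / levels * 1000
    print(f"{bits} bits -> {levels} levels -> {step_mv:.3f} mV per level")
# 8 bits over 1 V already means reliably telling apart steps of about 3.9 mV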
As far as I can tell, it's basically all nonsense hype.
For these same reasons, it's very likely that the brain is a digital system, not an analogue one. We know for a fact that DNA uses digital encoding and processing (or more specifically, base 4 encoding); so it's not such a stretch to believe the brain does too.
Veritasium should honestly get a Nobel Prize in education or something! These videos are incredibly insightful and inspiring. I am a software engineer, and this has motivated me to try out some machine learning out of excitement and curiosity! Thanks, Derek
Going between digital and analog in the case of Mythic actually makes it more like neurons in a way, since they operate with both chemical and electrical parts.
What do you mean chemical and electrical parts?
@@biscottypuff chemical in synapses, and electrical by conducting the action potential
For performing heavy mathematical operations on weak microprocessors analog computation comes in very handy as well! A great trick to do integration (and differentiation) on an Arduino is to use an opamp with a capacitor and a couple resistors to build an integrating (or differentiating) amplifier. With the ADCs and DACs readily available in that chip (same for PIC or other low clockrate UCs) it takes very little resources to get it going :-)
I think your idea is very good can you give me more info on it ?
give him more info, bro!🤠
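A rough numerical sketch of what that op-amp integrator stage computes, Vout(t) = -(1/RC) * integral of Vin dt (the R, C and input waveform below are made-up example values, simulated in Python rather than read back through the ADC):

import numpy as np

R, C = 10e3, 1e-6  # 10 kOhm and 1 uF, so RC = 10 ms (example values)
dt = 1e-4
t = np.arange(0, 0.05, dt)
v_in = 0.5 * np.sign(np.sin(2 * np.pi * 50 * t))  # +/-0.5 V square wave at 50 Hz

v_out = -np.cumsum(v_in) * dt / (R * C)  # Vout(t) = -(1/RC) * integral of Vin dt
print(v_out.min(), v_out.max())  # triangle-wave-like output, as the hardware integrator would give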
When is the math coprocessor coming back in analog version? Lol
like baltimore yesterday 0 students from 23 schools can do grade math. guess what gender are teachers and what size
Serious applications use digital processors that are up to the job.
Really appreciate the effort you put into researching a topic. Hats off to the people who make these discoveries.
When I started thinking about artificial neural nets, I just assumed they would really only happen on specialized analog computers in the future. Then google and others along with more powerful digital computers made it work pretty darn great.
I love being in this time of history, watching so much science fiction slowly become reality.
I went through engineering school in the 1970s, and they still taught us to program analog computers. It was a great way to learn the ins and outs of classical control theory. It was a holdover from the 1950s art of aircraft control system design, which NASA had perfected, then replaced with digital control system design. But during the transition from all analog to pure digital, NASA (specifically the Dryden [now Armstrong] Flight Research Center) made use of hybrid computers: combining the features of analog and digital computers. I've often wondered about whether this pairing would in some way allow us to build computers capable of solving the Navier-Stokes equations at fine enough mesh sizes to really tackle the problem of turbulence. I was blown away by the computation rate of that single, 3 watt chip. One thing analog computers don't do is differentiation - solving "differential equations" on an analog computer actually consists of casting the equations in integral form, then solving them. Trying to take a derivative on an analog computer, while possible from an electrical schematic standpoint, winds up amplifying noise so rapidly that the solution blows up. If digital circuits did the differentiating, that problem might be eliminated (especially with the breathtaking performance of digital filters in suppressing circuit noise). The chip design for something like that would be amazing!
Michael wrote, _"One thing analog computers don't do is differentiation..."_
Actually, it is very easy to do differentiation and integration with analog circuitry, using op-amps. If you make a current-follower op-amp circuit, and you put a capacitor instead of a resistor in the input path, you'll get a derivative out. If, instead, you put a capacitor in the feedback path, you'll get an integrated output.
Less obviously, you can also do multiplication and division "the slide rule way" -- i.e., by adding or subtracting logarithms, and then using exponentiation (antilog) on the sum or difference.
Current-follower op-amp circuits make it simple to add currents: just have several input paths, all connected (through resistors) to the op-amp's negative input: the output voltage is proportional to the negative sum of the input currents. To subtract a current, you simply insert an extra stage (an inverter stage) on one of the inputs.
To multiply or divide with analog op-amp circuits, you must first take the logarithm. That is actually simple, because the current/voltage response curve of a forward-biased PN junction is an almost perfect exponential curve. So to take a logarithm with an analog op-amp circuit, you simply use a diode or transistor instead of a resistor in the feedback path. Or, for an exponential (antilog) function, you use a diode or transistor (instead of a resistor) in the input path.
So to multiply and divide voltages, you first take the logarithms, then you sum them (or invert and sum, for division), then exponentiate.
A good source for information about these circuits is the Malmstadt - Enke - Crouch "instrumentation for scientists" series of books.
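As a quick numeric sanity check of that "slide rule way" (plain Python math standing in here for the diode log and antilog stages):

import math

a, b = 3.7, 12.5
product = math.exp(math.log(a) + math.log(b))   # log, sum, antilog -> a * b
quotient = math.exp(math.log(a) - math.log(b))  # log, difference, antilog -> a / b
print(product, a * b)    # both 46.25
print(quotient, a / b)   # both 0.296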
I went to school for ME, which meant taking the intro to EE course. The instructor in this course actually gave us some final exam problems that were exactly as you described - write out the circuit which models this differential equation (we used op amps like David Burton mentioned).
Most of it went RIGHT over my head being an ME student and having not even taken differential equations at that point, but this video was still very interesting to start to connect the applications of such a thing. This way of thinking isn't dead in universities yet it seems.
> combining the features of analog and digital computers. I've often wondered about whether this pairing would in some way allow us to build computers capable of solving the Navier-Stokes equations at fine enough mesh sizes to really tackle the problem of turbulence
Like another commenter wrote, it is possible to take derivatives and integrals on analog computers. I don't think that it's necessary, however, since oftentimes with finite differencing methods or finite element methods (some common numerical techniques to solve the Navier Stokes Eqns), we find the analytical forms of derivatives before computing anything, or we avoid calculating derivatives altogether with fine mesh sizes and timesteps. Afterwards it's a matter of matrix arithmetic to arrive at the numerical solution to the problem. Certainly analog computers may do these operations much more quickly than digital computers, but I think the main challenge with solving PDEs over some domain is that we need to somehow store the solution data across the entire mesh, across all timesteps. It's easier to do this all digitally since it's easier to interface with the RAM and the storage on your computer than it is to devise a scheme for a specialized piece of hardware to do so. Only when the computations are truly, truly expensive, as in the case of machine learning/neural nets, do the benefits outweigh the costs. But, who knows? Maybe someone clever enough will figure out a way!
Differentiation also does not work well in digital: your source data come from an ADC, with noise. In the simplest second-order-accurate case the derivative is a centred-difference stencil, d(x(t)) ≈ [-1 0 1]·[x(t-1) x(t) x(t+1)] / 2; in reality, you have to apply steep high-pass filters together with the differentiator, which makes the differentiator non-localized at the point t and dependent on the adjacent taps.
@@mikets42, to deal with noise, you do some sort of curve-fitting, e.g., regression analysis (or local regression analysis, LOESS). If you want the 1st derivative you do linear regression; for the 2nd derivative you do quadratic regression, etc.
_Caveat:_ I'm no expert on this. I've written regression analysis code, but not LOESS code.
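For example, a minimal sketch of the local-fit idea (the window size and noise level are arbitrary choices of mine): estimate the derivative of noisy samples by fitting a straight line over a sliding window instead of taking a raw finite difference.

import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 400)
x = np.sin(t) + rng.normal(scale=0.02, size=t.size)  # noisy samples; the true derivative is cos(t)

half = 10  # fit a straight line over a 21-sample window
deriv = np.full_like(x, np.nan)
for i in range(half, len(t) - half):
    window = slice(i - half, i + half + 1)
    slope, _ = np.polyfit(t[window], x[window], 1)  # local linear regression -> slope
    deriv[i] = slope

print(deriv[200], np.cos(t[200]))  # the fitted slope should land close to the true derivative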
Processors have always been getting new modules slapped on to cover different types of problems. The GPU was added to do repetitive operations; radio receivers usually have an analog demodulator to make signal processing on them practical; quantum computers are right around the corner, where 12 or so qubits could be connected to your processor in a separate box via Thunderbolt to perform verification and encryption tasks; GPUs now have internal tensor cores to perform the operations you discussed (and even a bit more general) at lower bit depth; and even operations like "multiply then add" have separate modules inside the processor to compute more efficiently than doing the multiply and the add as separate operations.
Speaking of radio receivers, think SDR. software defined radio, a revolution in radio..
Quantum computers are not right around the corner
Quantum computers are not gonna be beside your computers unless you have a massive shed with space for a state of the art helium/nitrogen cooling system on hand
@@Kyrator88 I'm not completely discounting the possibility of solid state room temperature qubits, but yeah, excitement about personal quantum computers is pretty silly.
A quantum computer is not required for performing quantum-safe encryption/decryption. NIST is very close to standardizing one of many candidates that provides this functionality.
There were computational control systems built for military aircraft that used Fluidics. A blend of analog and digital computers that utilized high pressure oil flowing thru spaces in a stack of very thin metal plates. They performed Boolean operations as well as giving dynamic analog control of hydraulic distribution for control surfaces and the like. They would make an interesting topic for a related video. And they were immune from EMP in the event of a nuclear attack.
Cool!
I happened to watch this just after playing with a modular (audio) synthesizer. In these, each module is controlled by a voltage, originating from an oscillator, a keyboard, or a "sequencer". The concept that makes a modular synth interesting is, the voltage pattern (waves) output from a module can either be used as an audio signal (if it's in the audio spectrum), or to control another module. In the simplest case, output from a voltage controlled oscillator (VCO) can be routed to a speaker to produce a sound. But it can also be routed to a module that filters a signal in some way, based on the output voltage of its predecessor.
Maybe the thing that makes "ambient" music's slowly-shifting textures interesting is that they mimic the neural networks of our brains.
A lot of them actually do! You can even (kind of) help your brain waves synchronise with the oscillations. It's not by brute force (you have to play along or it doesn't work very well), but it can greatly help with sleeping, learning and related stuff.
Reminds me of old hardware synths, where you had to connect each part of the synth with cables. That gave you an amazing amount of flexibility! BUT no one cared to write the configurations down... That was the funniest and most awful part at the same time...
The main problem with analog is saturation of current or voltage, which causes a loss of linearity. One way to resolve it, as he said, is to reconvert to digital and then amplify by a good factor.
During engineering we had a subject on analog and digital communication; this device was in the lab, where I just used to turn the knobs and alter the waveform, though I never completely understood what was going on. I passed that subject with grace marks 😝. Now that you've explained it, I feel I wasted an opportunity during my graduation, but I also think students would benefit immensely from watching this type of content. I wish you had made this video 6 years ago…
Same (lol). I just remember adjusting the input to watch the output (on an o-scope) change. I had no idea why.
aw Pravesh, you can always learn at a steady pace during any period of your life. keep learning!
It's like this: you learn lots of things in college/school so that you can probe into those matters in later years. A lifetime is a short period to review and understand it all, but you will get time for almost all of the subjects you are interested in.
"Simple tasks like telling apart cats and dogs." You can find more difficult task but this is already an incredibly complex task expecially when they are images
This is exactly the point though. Trivial for a human to do, hard for a computer.
@@henrypetchfood I literally just had a friend tell me a story about their mother misidentifying a pomeranian as a cat haha. Maybe not always trivial.
@@henrypetchfood It's trivial for a human because evolution produced neural circuits capable of solving this very hard problem. Our own minds are the least aware of what they do best.
True, but Derek's point is that in the grand scale of what we'd hope to achieve with analogue computers (in the future), telling apart cats and dogs is a simple expectation - yet it's still hard to do with current technology.
While the description of the fundamental tech here is great, where I feel this video falls down is in not recognizing that probably the primary issue of analog vs. digital is the economics of fabrication, and that's not really addressed.
Not a computer person, this is the first ever video I have watched that explains the drawbacks of analog computers other than the size limitations which is the only thing they teach in school. I think using both digital and analog can help tackle hurdles either technology can't do alone.
Another huge drawback is the engineering cost. Even if analog circuitry were as compact as digital, it's much harder to design complex analog circuits than digital. Digital circuits may be more advanced, but they're not more difficult!
As told to me many years ago by one of its engineers, the first Concorde models used analog computers. They had many in-flight problems during initial testing because while digital variables might overflow and give unpredictable results, the analog computer boards would fry when variables overflowed, resulting in higher than expected voltages and currents. It took them a while to protect every component from this and stop burning computers in flight. As harsh as it sounds, it was reportedly not a major problem, as Concorde was not really dependent on automation for safe flight. As reported by its test pilots, it was actually a pretty docile airplane.
The ongoing joke that dividing by zero fries the PC isn't even that wrong then lmao
@@fusseldieb Quite literally; if you try to divide a voltage by a near-zero resistance you'll get near-infinite current, i.e. a short circuit.
@@ryanmccampbell7 Dey ain't lyin.
Overflow also isn't difficult to program around on a digital computer. You take a performance hit, but the hit isn't that bad on modern hardware since almost anything besides reading/writing memory is dirt cheap.
@@Dyllon2012 The absolute hit isn't that bad. Relative hit might still be really bad.
Also keep in mind that languages like C or C++ are allowed to do extra crazy stuff when signed integers overflow. They aren't restricted to doing the predictable wrap-around.
One thing of note, our transistors can never be a single atom, it's a conductor wrapped around a resistor, there's just no way to do that with anything less than 3 atoms and they can't do that yet. As lithography gets smaller they find that the design of the transistor itself keeps changing, what worked on 14nm doesn't really work well on 10nm or 7nm, often various components become relatively larger even though overall the design has shrunk.
If we had stuck with a lower power usage similar to the air-cooled 486, then we could make transistors differently, and smaller, but we went for clock speed and performance, so the transistors are insanely complex and much, much, much larger than an atom.
I’m a computer scientist and had no clue how analog computers work. This two part series was really fascinating! Thanks! ❤
Hi Derek, I really enjoy all your videos and content. It's great that you have been putting out quality content for such a long time. Also I love how you dive deep enough to get the complexity and then wrap it up in the end with a memorable comparison.
One thing in this video I found ironic is that you have made videos about proper statistical representation and at 12:15 you plot the bar diagram not from zero but start at 20% then progressively drop down to 0%. I found it interesting as at first I didn't notice it but then once I saw it I realized how the representation really made me think it dropped lower than it actually did. In this case it doesn't change the overall message, however it made me more aware of how unassuming such displays of data can change the perception.
Thank you again for all your great content. Greetings from Austria where we don't have Kangaroos! 👋
Even within the digital domain, your typical computer is relatively quite slow.
The whole point of the clock frequency in a CPU/GPU is to logically separate circuit signals so that each instruction (or segment thereof) has time to complete; the clock must therefore be set for the slowest instruction/segment.
Because of this, though, the processor can carry out any combination of any number of its prepared instructions, and thus perform practically any mathematical algorithm given enough time and memory.
Any PBS-Space-Time, Sci Man Dan and Hbomberguy Fans here?
I mean, the Learning never ends, duh.
First, the whole ending of this video is about digital systems. The only things analog are the inputs, such as cameras or lidar. But those go through an A-to-D conversion, and the system being used to process that data and come up with solutions, in this case for a self-driving vehicle, is digital.
And slow is a relative thing. A PC can carry out millions of math problems a second and that's not slow. So, slow as compared to what? Analog systems aren't any faster even though there's a continuous output. The same is true for digital systems doing the same work. I know. I worked with military systems and saw both analog and digital firing solutions and I can tell you the digital systems were more accurate.
@@johndoh5182 I think the point being made was that digital systems, using the definition of systems that abstract signals into boolean logic (1 or 0), are fundamentally slower than analog (non-abstracted) systems using equivalent technology due to the fact that the partial states are often useful for more complex calculations which digital systems would have to take extra steps to compute.
He gives an example of an analog processor used for AI which uses the same technology as typical digital processors but uses the partial state to represent the multipliers stored in each neuron thus the part on AI driven vehicles.
Though with most prediction-based machine learning and neural networks, once the system has been trained, the prediction/decision process for any given input requires only about as much processing power as a single training sample, so the car itself should not necessarily require the specialised processor.
PS make that billions of math problems per second. If the IPC is 1 for any given instruction on any given processor, then if that processor is running at 4Ghz, then that is 4 billion mathematical operations per second per processor core (and that's not even counting SIMD).
The clock frequency is simply for synchronization, not only within the CPU but also with different devices like memory, the data bus, etc.; in short, to generate ticks. Simple instructions always run in the same number of ticks, so when you increase the frequency you increase performance, but that also generates more heat. It's the number of ticks that matters. Modern CPUs are RISC cores that emulate a CISC platform, so a lot depends on the instruction algorithm, a.k.a. the CPU microcode: better microcode gives more performance, but internally the simple RISC instructions are synchronized to the CPU architecture. For example, a simple NOP runs in one tick because that's how the CPU was designed: a 40 MHz CPU can run 40 million NOPs in one second; make it 50 MHz and it can run 50 million. In short, the frequency of a CPU/GPU is how many times per second it can say "I want to do something", which is why we try to organize data and design code with no collisions or stalling, so we don't waste precious ticks. You can't just say an instruction "has time to complete", because the worst instructions, like divide and multiply, take from 20 to 300 ticks depending on how big the numbers are; while the CPU executes such an instruction it can't do other operations, like talk to your GPU's data bus or access memory. That's why we next designed pipelines, and then multicore CPUs, so you can run many instructions in parallel. RISC CPUs even in 1997 had pipelines that could run 7 instructions at the same time. Now add to that a multicore CPU, like 32 cores. And again we come back to the problem of heat: the more cores a CPU has, the lower the frequency it can run at.
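To put the throughput numbers from this thread side by side (the core count, clock, IPC and SIMD width below are illustrative figures, not a specific CPU):

cores, clock_hz, ipc, simd_lanes = 8, 4.0e9, 1, 8  # illustrative figures only
scalar_ops_per_s = cores * clock_hz * ipc           # ops/s if each instruction handles one value
simd_ops_per_s = scalar_ops_per_s * simd_lanes      # ops/s counting SIMD lanes
print(f"{scalar_ops_per_s:.1e} scalar ops/s, {simd_ops_per_s:.1e} with SIMD")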
This is pretty much the point that Veritasium misses. My guy should read some Claude Shannon; it's clear he is not an EE degree holder. Digital communication and computation are resilient to noise and theoretically lossless, and being slower can be a side effect. Also, digital computers are way easier to design/fab in 2022.
Hey Veritasium, you should really look into the hardware used specifically to simulate neurons like those in the human brain. There are chips built at a physical level to simulate individual neurons, and as of right now consumers cannot buy them. It would be so cool if you could get an in-depth look into these; information about them is kind of difficult to find. Artificial intelligence in the traditional sense has neurons that don't even come close to how they work in real life; they should really be called artificial function generators. Generating consciousness with things like TensorFlow is just very unlikely. Most don't have memory, and if they do it's simplified, and recursion is also simplified compared to real-life neurons. Also, I think it would be really cool if you dove into the technicals of artificial consciousness.
Do you have anything about them like their name or resources to look into?
@@MGHOoL5 DuckDuckGo for "neuromorphic computing"; University of Manchester's SpiNNaker chip, Intel's Loihi chip, something out of Stanford, are actual implementations.
No one knows if General Intelligence and "consciousness" require the detailed neurobiology of the human brain, or whether it only requires a few more breakthroughs in conventional AI like backpropagation plus a few more orders of magnitude increase in the size of digital neural networks.
@@skierpage good call on the recommendation to search with something other than Google, won't find anything there...
@@tylerevans1700 Wait really? Is that a thing where you would go to another search engine like DuckDuckGo to find something 'more relevant' than you would find in google?
@@MGHOoL5 an ad blocker is essential to filter out paid search results in Google. However, Google still prioritizes those search results aligned with its business interest: its own services, sites that serve tons of ads using Google's ad tech, $^@! 8-minute videos on UA-cam instead of simple text explanations, showing sites with ads instead of providing the answer right in search results, etc.
DuckDuckGo does less of this, but its search algorithm isn't as smart at understanding what you're looking for. I use Firefox, which lets you easily choose which search engine to use from the location field.
I've watched a lot of your stuff for years now, but this is the best one by far. Great job of explaining something so complex, difficult, and relevant!