From Wikipedia: "The Fredkin gate is a circuit or device with three inputs and three outputs that transmits the first bit unchanged and swaps the last two bits if, and only if, the first bit is 1." "It is universal, which means that any logical or arithmetic operation can be constructed entirely of Fredkin gates." It even shows how you can make AND, OR, and NOT from Fredkin gates. So it truly can replace everything. Also, oddly enough, you can implement them using AND, OR, NOT, and XOR. It's so weird. We NEED a video on these things, Sean!
ok, we need not only a video on these things, we prolly need every chip manufacturer to start using these gates instead. the initial investment cost would be out weighted by the reduction of the sustained energy cost by a lot.
Yeah, I've been programming for 30+ years (though as a hobby, not professionally) and this is the first time I've heard about these gates. I'd really like to see a video on them as well, please.
I think the power argument currently only works for quantum implementations of the gates. Power consumption of classical gates is still dominated by switching losses and leakage. Classical implementations of a Fredkin gate would still have these losses, so there would be no benefit. It is only when we could eliminate switching losses or reduce them so much that information loss becomes important, that Fredkin gates would be useful in everyday electronics.
"the initial investment cost would be out weighted by the reduction of the sustained energy cost by a lot." Would it? Be very careful. Either you store all those extraneous data indefinitely. Or you still erase it and incur the energy penalty. I mean, it's interesting and all. But it looks very impractical from here.
I'm not sure that's how it work. I mean maybe - but then perhaps programming paradigmsge would find ways to take advantage of that. Most likely actually, and we probably can't really understand how right now. I mean would you envision OOP if you're staring at a Turing machine (I mean the actual one he was working on?)
It's important to note that reversible computing is not a free lunch. The crucial bit he omitted is that a reversible computation with no energy input is a random walk, it diffuses forwards and backwards through the computation and may take infinite time to reach the output state you desire. However, you may add energy system to drive the computation forwards. So there is a fundamental tradeoff between energy cost and computation speed.
Your mantra at 0:50 is pretty much how I went through university. Also another good one that I follow is: "A complex thing is just lots of simple things put together" (which means that if it is too hard you haven't broken it down enough yet)
It's 1300 Hz. I put it through FL Studio (FabFilter Pro Q). You're a note off, it's a E6 not a D6 ;) Can't fool a decade-long producer :3 @gabriel - He played a D3 on the guitar and whistled an E6
I only used a spectrum analyzer from an app on my phone ("phyphox") and it probably caught the later part of the whistle, where the pitch drops quite a bit. But honestly D6 sounds way closer to me when I play it side by side.
He explains things like opening wikipedia tabs all over the place and closing them one after the other.. takes some focus to keep up with his goal but like it
This guy's infinite enthusiasm and unbounded love for his subject matter is self-evidently just way off the charts. See what I did there? Infinite and unbounded? Off the charts? Those things are just as true in terms of what he's talking about as they are of how passionately he tries to get things accross It's just that he does it all so much more spontaeously and infectiously than I ever could. Simply awesome.
5:10 : "Is that not just because we have two inputs and only one output?" LoL Phil has a very appropriate reaction here. That question was indeed a very neat insight about the whole thing.
@@seanspartan2023Yup the fredkin gate would allow for a bijection between the inputs and outputs permitting for a zero increase in information entropy. Pretty cool.
I'm no engineer myself, but I think you could swing it. Have videos for each of the various principles, explain the nuances of generation and transmission..there's certainly no lack of theory to cover!
how fast can you compute depends on which matters more core count or clock speed cause core count increases power a lot as you add more cores to the cpu count
Mathematically speaking, this reminds me of inverse functions. The system is reversible if every final position can be paired with at most one initial condition. Which in math terms is like saying the mapping from the initial condition set to the final condition set must be injective (i.e. must be a monomorphism).
"Here’s the fascinating thing: What costs the energy is not the computation itself, it’s erasing information." Phil once again blows my goddamn mind with something that ought to be obvious. I love you, man.
This video captures so many aspects of what makes science/math popularization work: - Both Phil and Sean are clearly excited about the topic. The synergy between them creates a dynamic tension surrounding the core principles being discussed. - The topic itself doesn't just cross the boundaries between math, computing and physics, it unifies and annihilates them. - The topic also neatly covers so many orders of magnitude, lending a perceptible scale to the entire topic. - The most advanced math used was the AND gate. The demos were a dead tennis ball and a guitar. So much from so little! - Fundamentals! "Long time, narrow frequency band. Short time, wide frequency band." Awesome. Well done!
This is one of the most interesting Computerphile videos I've seen in a long time. It's one of those topics that you don't ever hear about in when doing a bachelor's degree or working in software, but is still incredibly fascinating.
Dear Computerphie team. I have added accurate English captions as well as Russian captions about a year ago. Could you please review and perhaps publish them, should you be satisfied. Thank you!
That was MIND BLOWING. Totally understood exactly why he took so long to explain it, he had to set up prior knowledge at each point so you could follow along. Amazing that we have actually figured this out.
When sending Morse code (or any digital transminnion) you often run into the frequency vs bandwidth problem. You can use a verry narrow audio filter with slow speeds but that same filter is useless at high speeds. The filter resonates or "rings" and you hear a nearly steady tone and all information is lost.
While it's usually received by ear it is a digital signal, on or off. Its bandwidth is determined by the frequency it's sent and the time it takes to go from 0 to full signal. The faster both of these are the more transients or clicks there are and the more bandwidth it takes up. To take advantage of limited space Hams often crowd together only a few hundred Hertz apart. They need to hear as narrow a part of the audio spectrum to isolate just the signal they want to decode but too narrow a filter and the begining and end of each element will be less destinct making fast signals smear together. These problems occur in all modes of information transmission from analog TV and radio to digital signals on fiber optic cable. Hope that helps.
I think this is the best video I have seen on here. My favourite professor/presenter over from minutephysics talking about fundamentals of computer science. Just amazing. Thank you so much for this!
Ray Kurzweil estimates we'll reach the absolute limits of computing before 2100. That might not seem like a long time, but Kurzwel always points out that progress is always accelerating.
I did a search for "computational limits" and this is one of the few relevant videos. Great video. Thank you. I first read about the limits of computation in "The Singularity is Near" and I realized the connection to reversibility in the YT video "Quantum Computing for Computer Scientists". I would love to see more on this topic.
Well, the link seems really to be missing, though all the pieces of the puzzle are there. I am no physicist at all, but here is what I've sorta learned from other sources (copying from my other comment): Ok, so less time means more energy, but why is the energy limited? Just pump more and more. The reason is: you need something to transmit energy. Say, to perform an erasure, you need a photon. The shorter the time, the more energy this photon is to have. Now, E=mc2, so the photon's mass grows proportionally to the energy. Now, there is the limit for the mass a photon may have in our Universe - it is the critical mass, after which it will collapse into a black hole (that's why it went about black holes in the video). This sets the upper bound for the mass hence the upper bound for the energy hence the lower bound for the computation time.
I did(and still do) some of that to teach myself code. I'd concoct some sort of maths and try to translate it into Python. If I hit a wall(or relied on spamming if/else too much), I'd google better ways to do it and usually learn something new.
Thank you, that's the best short explanation of the Uncertainty Principle I've seen. I also took a while to come to that realisation, and any time anyone asks me to explain it I've gone for a very similar explanation to the one you gave there. Far too many "explanations" confuse it with measurement issues.
I have a take-home exam right now. Damn you Computerphile! That was his best video to date. There were sooo many interesting aspects of physics and information that was so well explained.
How is the spelling of the "fredgen gates" that he mentions at 5:54? I would like to read more about them but Googles autocorrection doesn't point me into the right direction.
My computer science professor told me that another aspect would limit the speed of computers: The speed of light. If we increase the speed of CPUs more and more, the electrons have to cross the CPU chip in a shorter and even shorter time. At some point, they will reach a physical speed limit (practically not even close to the speed of light), so they won't be able to travel the 2-5 centimetres of one edge of the CPU to the other during that single flop. According to the professor, if we reach that limit, our only chance to further increase computing speed would be parallelization of multiple of those close-to-the-limit chips.
I did the calculation for some Intel Core i7 chip (37.5 mm length, 3.7 GHz clock) 37.5 mm (that's 0.0375 m ), 3.7 GHz (that's 1/(3.7E9) seconds per flop) ==> (0.0375 m) / (1/(3.7E9) s) = 138,750,000 m/s (that's already 46.25% speed of light, and not in a vacuum but in solid matter!)
I understand you. I did mathematics as my undergrad, and I almost switched to electronics engineering, and I still don't understand why I didn't swap, because today I am doing a masters degree in microelectronics. But the mathematics background gives me a strong foundation, and a mind that can grasp anything. I advise anyone who wants to get into the hard sciences to first get a maths degree if you can, and only then go into your chosen subject, be it physics, chemistry, computer science, electronics, whatever. Maths is the queen of science as Gauss said.
Yay, I have the exact same mantra as a student. I always try to code stuff to truly "get it". If I can explain a concept to a dumb machine, I must know it.
I think codes exist for it, but the problem is that it's too computationally intensive for more than a few particles. This is because you have to represent the state of the system as a "probability distribution" over every possible arrangement of the particles, to take account of entanglement. (Caveat: not really probability, since it's complex-valued.) Sources: ua-cam.com/video/w7398u8G588/v-deo.htmlm8s en.wikipedia.org/wiki/Schr%C3%B6dinger_equation#Particles_as_waves
As a programmer I do have say programming a (to me) complex bit of maths helps me understand so much more about it. I don't understand how this works, for years people have told me if you're programming you're doing maths, but to me personally it just feels so different, so much more friendly and specific.
it is adressed at the end of the video. i read the comments early. hadn't even noticed them. he likes his props it seems. it does help translate harder to understand concepts into much simpler ones.
Harry Potter, The Death of Expertise, Maxwell's Demon, and pink bunny ears are the essential accoutrements of any working physicist. Especially those into metal.
I don't know if anyone had already pointed this out, but 1 FLOP (floating point operation per second) isn't equivalent to 1 bit of operation per seconds used by the MIT paper. Floating point math are complex operation, and there are multiple types of floating point function. 1FLOPS on average roughly translate to 20,000 bits of operation per second according to some paper. So we are five orders of magnitude closer to the fundamental limit than this video suggested at the end. There is also another issue to compare FLOPS with figure given by this paper. Notice FLOPS is per second, While the 10^50 figure isn't divided by time. They simply convert a kilogram of mass into pure energy and calculate how much calculation this much energy can perform. When we talk about Laptop level we usually associated this with its power envelop. Based on some rough calculation from their numbers for a 100W laptop the fundamental limit would be around 3x10^31 FLOPS.
Well the argument about sexual dimorphism was one thing... I was thinking more along the lines of him not acting like a decent person when he made repeated personal attacks instead of proper debate.
Weird how it's always the people with no education or expertise in science that accuse actual scientists of denying biology. I don't believe he doxed anybody, I think you're making that up.
- в видео говорится про фундаментальный предел скорости вычислений - чувак рассказывает что он физик, но раньше программировал, "если я не могу запрогроммировать, я это не понимаю" - что компы делают? вводим данные, производим вычисления, получаем данные - обратимые (reversible) вычисления, аналогия с мячиком: теряется исходная информация, после вычислений и нельзя сказать что было на входе, так же как теряется энергия в механических системах - 3:54 есть прямая зависимость между обратимостью вычислений и содержимым выходных данных - 4:59 есть несколько вариантов как получить 0 - если юзать идеальный Вентиль Фредкина не будет потерь энергии при вычислении, т.к. потери появляются не при вычислениях, а при стирании информации - 7:46 если юзать Вентиль Фредкина, то всё равно упрёшься в принцип неопределённости Гейзенберга - 9:20 если нота звучит долго, можно легко понять какая у неё частота, если коротко - то сложно т.к. там много разных частот - 11:15 (с большим упрощением) есть зависимоть между частотой и временем - чем больше частота операций, тем больше энергии понадобится чтобы оперировать на ней - 12:20 в научной работе говорится про ultimate laptop - по сути это компьютер это плазма дико высокой температуры, это не суперкомпьютеры сегодняшнего дня - компы 2020 года = 1 ЭКЗАФЛОП (10^18). Компы 2030г = ЗЕТАФЛОТ (10^21). предел = 10^50 - сравнивает рост человека к размерам обозримой вселенной: разница 26 порядков - разница между скоростью вычислений 29 порядков (если сравнивать 10^21 к 10^50 ФЛОПС)
Does Phil teach? From start to finish he was throurough and his analogies were perfect. I feel like I could actually make it through a university level physics class if he was teaching. Thank you for the content everyone.
Another aspect, or perspective, on the lower limits of the computational scale in both space and time concerns being able to distinguish a Zero from a One. The smaller and faster your computing elements become, the more errors that are inevitably going to occur. Error correction techniques can mitigate these errors, but it soon gets to the point that the computation for error correction greatly exceeds the computation of the problem we seek to solve. We are already seeing this in quantum computer designs. One recent design required 28 qubits to create a single stable and reliable qubit for computation. In other words, only 3.6% of the qubits are used for computation, and 96.4% of the qubits are needed just to make the computation work in the real world. This is the "Law of Diminishing Returns", which is actually Murphy's Law writ large.
The information density had a high fluctuation, (lecturers have to fill time and say things multiple ways to cover the various ways that people learn) but the information is still useful.
These videos are just amazing. Congratulations to the professors and to whoever had the idea of making them (the videos, not the professors. Though they deserve some credit too (the professor makers, I mean)). They surely make me wish I had studied at Nottingham.
I don't agree with phils political views either but would you leave the hate out of a video where he's not spouting his political views - its educational there's nothing to dislike about it
Nnotm, if he is that kind of person I would find it odd if people complain that people do the same thing to him. People deplatforming are trying to make someone a social pariah.
That Lloyd paper is a good read. Some notes / takeaways: * The ultimate laptop is arbitrarily chosen to be 1 kg and 1 liter, which gives it 10^51 ops/second and 10^31 bits of memory. While processing power is simply proportional to mass (here it's 10^51 ops/second), the memory is more complicated, but for a fixed energy density it is an extrinsic property, i.e. double the energy and the volume will give double the memory. * The mass & volume choice also sets the "degree of parallelization." For his choices the system is highly parallel (10^10 degree of parallelization). It would be very inefficient if given a serial task. If we want to do serial computation, we compress the computer which reduces its memory and parallelization. At the extreme, a black hole is fully serial, and for 1 kg it would store 4x10^16 bits while still achieving 10^51 ops/second. * Then there's the matter of waste / energy consumption. Lloyd notes that error correction will require eliminating incorrect bits. Any removal of information to the environment comes at an energy cost (it's an irreversible operation). And there is a limit to how quickly we can do this (analogous to a computer's ability to cool itself) which suggests that the ultimate laptop can't handle more than 10^(-10) errors/operation. If it's at this limit it will also consume 4x10^26 watts of power. For systems that are less parallel, this is less of a problem (error rate threshold ~ 1/(degree of parallelization)). Two comments: * This applies to both quantum and classical computers (and presumably any yet-to-be-discovered type). However, the benefit of quantum algorithms is that they can change the cost formula, e.g. Grover's algorithm does a search which would take N operations classically in just sqrt(N) operations. Therefore, even if the number of operations that we can perform is physically limited, there is no clear limit on what we can actually do with a given number of operations; it changes anytime we discover new algorithm types that have lower costs. * Personally, I'm skeptical about reversible computations. Intuitively if we perform some complex operation on a reversible computer, all the initial information must still be present at the end for reversibility to work. Yet so much of 'intelligence' follows this pattern: consume lots of data, filter it to find something interesting, and then do lots of processing (often expanding the data) on that specific finding. In a reversible computer, all that initial data that was filtered out has to remain in the computer, as what basically amounts to garbage bits. I could be wrong, but I think all emulations of irreversible algorithms with reversible gates result in this sort of garbage. At some point we will have to clear these bits to make room for more processing, and doing some comes at the standard kTln(2) energy cost per bit. So instead of just considering the removal of error bits, I would argue that if we're interested in performing any useful/intelligent calculation, there is going to be some rate of cleared bits per operation in order to complete the calculation and bring the computer back to its initial state (in which the memory is not full of garbage). It will be characteristic of the algorithms/computations being done, but probably much higher than the 10^(-10) error rate required for this ultimate laptop to operate (which would just mean we would have to settle for a somewhat less 'ultimate' version).
Have a total geekcrush on dr. Moriarty! :) This video touches some very interesting and far out points in a very nice and understandable way. Am a computer engineer, love the phisics in this video. Thank you for a great work you do at Computerphile and the Nottingham University.
9:45 I assume that's why you can play drums over any key track and tuning drums is not essential because the time is short and frequency range is wide. But then what about cymbles?
Jan U Probably because he's a criminal mastermind, whose intelligence is only beaten by Sherlock Holmes. Oh, you were talking about Professor Phil Moriarty? No idea.
Ah, but don't you want A to be the LSB and B to be the MSB? And you wouldn't put the B column before the A column, would you? (Yeah, it annoyed me too.)
"The only way somebody could possibly dislike this video is if they're a fanboy!" - Not the words of a fanboy, according to you. I mean, let's be honest - if Thunderf00t's fanbase *wanted* to brigade this with downvotes, I think they'd be able to to a bit better than 193. But, no, you think that every downvote ever has to ultimately trace back to him because that's not a ridiculous position at all.
From Wikipedia: "The Fredkin gate is a circuit or device with three inputs and three outputs that transmits the first bit unchanged and swaps the last two bits if, and only if, the first bit is 1."
"It is universal, which means that any logical or arithmetic operation can be constructed entirely of Fredkin gates."
It even shows how you can make AND, OR, and NOT from Fredkin gates. So it truly can replace everything.
Also, oddly enough, you can implement them using AND, OR, NOT, and XOR. It's so weird.
We NEED a video on these things, Sean!
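Here's a minimal Python sketch of the gate as described in the Wikipedia quote above, plus AND, OR, and NOT recovered from it. Which input is pinned to a constant and which output you read are my choices for illustration; the other outputs are "garbage" kept for reversibility:

```python
# A minimal sketch of a Fredkin (controlled-swap) gate: the control bit c
# passes through unchanged, and a and b are swapped iff c is 1.

def fredkin(c, a, b):
    return (c, b, a) if c == 1 else (c, a, b)

# Classical gates fall out by fixing one input to a constant and reading
# the third output (the other two outputs are garbage, kept for reversibility):
def AND(x, y):
    return fredkin(x, y, 0)[2]   # c=0 -> 0, c=1 -> y, i.e. x AND y

def OR(x, y):
    return fredkin(x, 1, y)[2]   # c=0 -> y, c=1 -> 1, i.e. x OR y

def NOT(x):
    return fredkin(x, 0, 1)[2]   # c=0 -> 1, c=1 -> 0

for x in (0, 1):
    for y in (0, 1):
        assert AND(x, y) == x & y and OR(x, y) == x | y
    assert NOT(x) == 1 - x
print("AND, OR, NOT all recovered from Fredkin gates")
```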
ok, we not only need a video on these things, we probably need every chip manufacturer to start using these gates instead.
the initial investment cost would be outweighed by the reduction in sustained energy cost by a lot.
Yeah, I've been programming for 30+ years (though as a hobby, not professionally) and this is the first time I've heard about these gates. I'd really like to see a video on them as well, please.
I think the power argument currently only works for quantum implementations of the gates. Power consumption of classical gates is still dominated by switching losses and leakage. Classical implementations of a Fredkin gate would still have these losses, so there would be no benefit. It is only when we could eliminate switching losses or reduce them so much that information loss becomes important, that Fredkin gates would be useful in everyday electronics.
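To put rough numbers on that point: at room temperature, the Landauer erasure cost sits about four orders of magnitude below a typical CMOS switching event. The capacitance and voltage below are ballpark assumptions for a modern process, not data for any specific node:

```python
# Rough comparison of the Landauer limit vs. CMOS switching losses.
# C and V are ballpark assumptions, not figures for a real process.

from math import log

k_B = 1.380649e-23           # Boltzmann constant, J/K
T = 300                      # room temperature, K
landauer = k_B * T * log(2)  # minimum energy to erase one bit, ~2.9e-21 J

C = 1e-16                    # assumed gate capacitance, ~0.1 fF
V = 0.7                      # assumed supply voltage, V
switching = 0.5 * C * V**2   # energy per switching event, ~2.5e-17 J

print(f"Landauer limit:  {landauer:.2e} J/bit")
print(f"CMOS switching:  {switching:.2e} J/event")
print(f"Ratio: ~{switching / landauer:.0f}x")   # roughly 10^4
```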
"the initial investment cost would be out weighted by the reduction of the sustained energy cost by a lot."
Would it? Be very careful. Either you store all those extraneous data indefinitely. Or you still erase it and incur the energy penalty. I mean, it's interesting and all. But it looks very impractical from here.
I'm not sure that's how it works. I mean, maybe - but then perhaps programming paradigms would find ways to take advantage of that. Most likely, actually, and we probably can't really understand how right now.
I mean, would you have envisioned OOP if you were staring at a Turing machine (I mean the actual one he was working on)?
It's important to note that reversible computing is not a free lunch. The crucial bit he omitted is that a reversible computation with no energy input is a random walk: it diffuses forwards and backwards through the computation and may take an arbitrarily long time to reach the output state you desire.
However, you may add energy to the system to drive the computation forwards. So there is a fundamental tradeoff between energy cost and computation speed.
@Hubert Jasieniecki Not for fully reversible computing, that's the whole point of it.
@@andrewberger1882 how would any computing (reversible or not) operate without energy input?
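To make the random-walk point above concrete, here's a toy simulation (a cartoon, not a physical model): an unbiased walk through N computation steps takes on the order of N² steps to finish, while even a small forward bias, bought with dissipated energy, brings the time close to N:

```python
# Toy model: a "computation" is a walk over N sequential steps.
# p_forward = 0.5 is the undriven (zero-dissipation) reversible case;
# larger p_forward represents energy spent driving the computation.

import random

def steps_to_finish(n_steps, p_forward, rng):
    pos, t = 0, 0
    while pos < n_steps:
        pos += 1 if rng.random() < p_forward else -1
        pos = max(pos, 0)   # can't step back past the start
        t += 1
    return t

rng = random.Random(0)
N = 100
for p in (0.5, 0.55, 0.75, 1.0):
    avg = sum(steps_to_finish(N, p, rng) for _ in range(20)) / 20
    print(f"p_forward={p}: ~{avg:.0f} steps (vs {N} minimum)")
```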
Your mantra at 0:50 is pretty much how I went through university.
Also another good one that I follow is: "A complex thing is just lots of simple things put together" (which means that if it is too hard you haven't broken it down enough yet)
This.
Second this
9:34 The whistle was roughly 1180 Hz
The closest musical note is a D6
What is its relationship to the note he was playing on the guitar?
It's 1300 Hz. I put it through FL Studio (FabFilter Pro Q). You're a note off; it's an E6, not a D6 ;)
Can't fool a decade-long producer :3
@gabriel - He played a D3 on the guitar and whistled an E6
I only used a spectrum analyzer from an app on my phone ("phyphox") and it probably caught the later part of the whistle, where the pitch drops quite a bit. But honestly D6 sounds way closer to me when I play it side by side.
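For anyone who wants to settle the D6-vs-E6 question themselves, here's a quick sketch converting a measured frequency to the nearest equal-tempered note (assuming A4 = 440 Hz):

```python
# Convert a measured frequency to the nearest equal-tempered note name.

from math import log2

NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def nearest_note(freq_hz):
    midi = round(69 + 12 * log2(freq_hz / 440.0))   # nearest MIDI number
    name = NAMES[midi % 12] + str(midi // 12 - 1)
    exact = 440.0 * 2 ** ((midi - 69) / 12)          # that note's exact pitch
    return name, exact

for f in (1180, 1300):                               # the two measurements above
    name, exact = nearest_note(f)
    print(f"{f} Hz -> {name} ({exact:.1f} Hz)")
# 1180 Hz -> D6 (1174.7 Hz); 1300 Hz -> E6 (1318.5 Hz)
```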
D6 = E6 Baroque.
Informæætion
That is out iPod ????
Give the guy a bræk...
@@rich1051414 bræk means vomit in Danish
Celtic information > information
Imagin æ æ æ tion 🌈
He explains things like opening Wikipedia tabs all over the place and closing them one after the other... it takes some focus to keep up with where he's going, but I like it.
'I have a theoretical degree in physics'
It does sound funny when you think about it. :)
Brilliant
so do I. you'll have to theorize my degree in order to see it, though...
That makes you smarter than Neil deGrasse Tyson if you reckoned Christ as your savior
man of culture spotted
This guy's infinite enthusiasm and unbounded love for his subject matter is self-evidently just way off the charts. See what I did there? Infinite and unbounded? Off the charts? Those things are just as true in terms of what he's talking about as they are of how passionately he tries to get things across. It's just that he does it all so much more spontaneously and infectiously than I ever could. Simply awesome.
5:10 : "Is that not just because we have two inputs and only one output?" LoL Phil has a very appropriate reaction here. That question was indeed a very neat insight about the whole thing.
Agreed... Everything boils down to injective functions
@seanspartan2023 Yup, the Fredkin gate would allow for a bijection between the inputs and outputs, permitting zero increase in information entropy. Pretty cool.
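That bijection claim is easy to verify exhaustively, since there are only 8 possible input triples:

```python
# Exhaustive check: the Fredkin gate maps the 8 input triples to 8
# distinct output triples, so the mapping is invertible and no
# information is destroyed.

from itertools import product

def fredkin(c, a, b):
    return (c, b, a) if c else (c, a, b)

outputs = {fredkin(*bits) for bits in product((0, 1), repeat=3)}
assert len(outputs) == 8
print("8 inputs -> 8 distinct outputs: Fredkin is a bijection")
```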
What an excellent presenter. Had me engaged from start to finish.
Chris Jones Agreed
Me too.
check out his other videos in sixty symbols
It's Professor Moriarty in the flesh, of course he's excellent :D
I could do with an electrical engineering phile
Would watch
EEVblog
DanieleGiorgino I like the computerphile/60 symbols structure
I recommend BigClive
I'm no engineer myself, but I think you could swing it. Have videos for each of the various principles, explain the nuances of generation and transmission... there's certainly no lack of theory to cover!
One of the best episodes, IMO. It tied several interesting concepts together very nicely. Great job!
How fast you can compute depends on which matters more, core count or clock speed, because power consumption increases a lot as you add more cores to the CPU.
Mathematically speaking, this reminds me of inverse functions. The system is reversible if every final position can be paired with at most one initial condition. Which in math terms is like saying the mapping from the initial condition set to the final condition set must be injective (i.e. must be a monomorphism).
"Here’s the fascinating thing: What costs the energy is not the computation itself, it’s erasing information."
Phil once again blows my goddamn mind with something that ought to be obvious. I love you, man.
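It's worth seeing just how small that erasure cost is. A quick sketch of the Landauer bound at room temperature (the 1 TB figure is just an arbitrary illustration):

```python
# Putting a number on "erasing information costs energy": the Landauer
# bound k*T*ln(2) per bit is astonishingly small at room temperature.

from math import log

k_B = 1.380649e-23                     # Boltzmann constant, J/K
T = 300                                # room temperature, K
per_bit = k_B * T * log(2)             # ~2.87e-21 J

bits = 8e12                            # a full 1 TB drive, as an example
total = per_bit * bits
print(f"Per bit:      {per_bit:.2e} J")
print(f"Erasing 1 TB: {total:.2e} J")  # ~2e-8 J, far below what real drives spend
```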
"Uncertainty principle" Is great name for progressive technical death metal.
Prog never ceases to amaze me
The thought of a band named "Uncertainty Principle" sounds awfully hipster.
Sounds like some christian rock band.
Makes me think of the classic Thrash Metal album by Kreator - Terrible Certainty. ua-cam.com/video/ulbFJ0Cvfic/v-deo.html
Who listens to Tiamat in 2017? :D Interesting to hear that 10 to the 50 number... will robots then replace humans?
This video captures so many aspects of what makes science/math popularization work:
- Both Phil and Sean are clearly excited about the topic. The synergy between them creates a dynamic tension surrounding the core principles being discussed.
- The topic itself doesn't just cross the boundaries between math, computing and physics, it unifies and annihilates them.
- The topic also neatly covers so many orders of magnitude, lending a perceptible scale to the entire topic.
- The most advanced math used was the AND gate. The demos were a dead tennis ball and a guitar. So much from so little!
- Fundamentals! "Long time, narrow frequency band. Short time, wide frequency band." Awesome.
Well done!
Teacher: “You there. Explain what Quantum Physics is”
Me: “Hold my acoustic guitar”
This is one of the most interesting Computerphile videos I've seen in a long time. It's one of those topics that you don't ever hear about when doing a bachelor's degree or working in software, but is still incredibly fascinating.
Yup, somehow never heard of it in my bachelor’s CS degree
Dear Computerphile team, I added accurate English captions as well as Russian captions about a year ago. Could you please review and perhaps publish them, should you be satisfied? Thank you!
That was MIND BLOWING. Totally understood exactly why he took so long to explain it, he had to set up prior knowledge at each point so you could follow along. Amazing that we have actually figured this out.
When sending Morse code (or any digital transmission) you often run into the frequency vs bandwidth problem. You can use a very narrow audio filter at slow speeds, but that same filter is useless at high speeds. The filter resonates or "rings" and you hear a nearly steady tone and all information is lost.
Can you elaborate? Morse code is traditionally an analog system.
While it's usually received by ear, it is a digital signal, on or off. Its bandwidth is determined by the frequency it's sent at and the time it takes to go from 0 to full signal. The faster both of these are, the more transients or clicks there are and the more bandwidth it takes up. To take advantage of limited space, hams often crowd together only a few hundred hertz apart. They need to hear as narrow a part of the audio spectrum as possible to isolate just the signal they want to decode, but with too narrow a filter the beginning and end of each element will be less distinct, making fast signals smear together. These problems occur in all modes of information transmission, from analog TV and radio to digital signals on fiber optic cable. Hope that helps.
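If you have numpy handy, you can see this smearing numerically: the same tone keyed for a short versus a long duration has a far wider spectrum when short. A rough sketch (the -6 dB width is my arbitrary choice of bandwidth measure):

```python
# Numerical demo of the time-bandwidth tradeoff: a 1 kHz tone keyed for
# a long vs. a short duration, and the width of its spectrum.

import numpy as np

fs = 48_000                     # sample rate, Hz
f0 = 1_000                      # tone frequency, Hz

def spectrum_width(duration):
    t = np.arange(int(fs * duration)) / fs
    x = np.sin(2 * np.pi * f0 * t)
    spec = np.abs(np.fft.rfft(x, n=fs))       # zero-padded: 1 Hz per bin
    freqs = np.fft.rfftfreq(fs, 1 / fs)
    half = spec >= spec.max() / 2             # -6 dB points of the main lobe
    return freqs[half].max() - freqs[half].min()

for d in (0.5, 0.005):          # a long dash vs. a short dit
    print(f"{d*1000:>6.1f} ms tone: ~{spectrum_width(d):.0f} Hz wide")
```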
Related reading:
en.wikipedia.org/wiki/Nyquist_rate
en.wikipedia.org/wiki/Shannon%E2%80%93Hartley_theorem
+JamesHaskin The system itself is digital. So what you have shown is you don't know the difference between an analog system and a digital system.
I think this is the best video I have seen on here. My favourite professor/presenter over from minutephysics talking about fundamentals of computer science. Just amazing. Thank you so much for this!
I was listening to Prof. Moriarty while working away on my own stuff and I look over and suddenly there's a guitar.
"is that not just because we've got 2 inputs and only 1 output"... quite possibly the best question i've heard on any related channel... fantastic
so when do we get consumer level machines with 10^50 flop processors? and can it run crysis?
it can run crysis at roughly 10^37 fps
written out that's
~10000000000000000000000000000000000000 fps
Oh... it'll CAUSE a crysis.
Crystal meth maybe.... j/k :)
Ray Kurzweil estimates we'll reach the absolute limits of computing before 2100. That might not seem like a long time, but Kurzweil always points out that progress is always accelerating.
@@Dirtfire I seriously doubt that, since to hit the limit you'd need systems that couldn't be operated safely anywhere within the solar system.
I did a search for "computational limits" and this is one of the few relevant videos. Great video. Thank you. I first read about the limits of computation in "The Singularity is Near" and I realized the connection to reversibility in the YT video "Quantum Computing for Computer Scientists". I would love to see more on this topic.
I'm only a recreational physicist. Is this why I can't find the link?
+MyyMeli ua-cam.com/video/mBdCE5hOexM/v-deo.html
Thank You!
Well, the link seems really to be missing, though all the pieces of the puzzle are there. I am no physicist at all, but here is what I've sorta learned from other sources (copying from my other comment):
Ok, so less time means more energy, but why is the energy limited? Just pump more and more.
The reason is: you need something to transmit the energy. Say, to perform an erasure, you need a photon. The shorter the time, the more energy this photon must have. Now, E = mc², so the photon's effective mass grows proportionally to its energy.
Now, there is a limit to the mass a photon may have in our Universe - the critical mass, beyond which it will collapse into a black hole (that's why the video went on about black holes). This sets an upper bound on the mass, hence an upper bound on the energy, hence a lower bound on the computation time.
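Taking that argument at face value (I'm no physicist either), here's a back-of-envelope sketch: a carrier acting within time dt needs energy E ~ ħ/dt, that energy has an effective mass E/c², and packing it below its own Schwarzschild radius gives a black hole. All figures rough:

```python
# Back-of-envelope version of the comment's argument. All figures rough;
# this is a cartoon of the scaling, not rigorous physics.

hbar = 1.054571817e-34   # reduced Planck constant, J*s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8         # speed of light, m/s

def schwarzschild_radius_for_dt(dt):
    E = hbar / dt        # energy needed to act within time dt
    m = E / c**2         # effective mass of that energy
    return 2 * G * m / c**2

for dt in (1e-15, 1e-30, 1e-45):
    print(f"dt = {dt:.0e} s -> r_s = {schwarzschild_radius_for_dt(dt):.1e} m")
# The collapse scale only bites at absurdly short times, which is why the
# fundamental limit sits so many orders of magnitude beyond today's machines.
```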
Best Computerphile show yet IMHO.
Prof Moriarty has always been one of my favourites, but I think this might be his best video yet.
I like it when Prof Moriarty gets excited. Granted, it doesn't take much.
Wow, I got goosebumps at the end of the video when you explain how far away we actually are from the computational limit.
coding as a way of explaining maths… now that would make an interesting series :-)
Bill Todd machine learning is all about programming maths.
I did (and still do) some of that to teach myself to code.
I'd concoct some sort of maths and try to translate it into Python. If I hit a wall (or relied on spamming if/else too much), I'd google better ways to do it and usually learn something new.
Coq
Thank you, that's the best short explanation of the Uncertainty Principle I've seen. I also took a while to come to that realisation, and any time anyone asks me to explain it I've gone for a very similar explanation to the one you gave there. Far too many "explanations" confuse it with measurement issues.
Could we actually get an electric engineering phile channel?
Ramonatho as an electronics engineer, I approve!
Ashwith Rego as a cs Student i still approve
I think UA-cam channel W2EAW has excellent electrical engineering content.
ELECTROBOOM =)
I have a take-home exam right now. Damn you Computerphile!
That was his best video to date. There were sooo many interesting aspects of physics and information that were so well explained.
They asked me how well I understood theoretical physics. I said I have a theoretical degree in physics. They said welcome aboard!
Well, I wish youtube and this channel existed back in the days when I was an undergrad.
Love you, guys
What is the spelling of the "fredgen gates" he mentions at 5:54? I would like to read more about them but Google's autocorrect doesn't point me in the right direction.
+severalthngs en.m.wikipedia.org/wiki/Fredkin_gate
Computerphile Thank you for the link and all the hard work that you invest in the production of this videos. Keep going!
Computerphile you don't think that you can just casually mention these and walk away do you? 😃 we need to have an episode on these
This man has a truly awesome gift for information transfer! I envy his students.
I think this is Phil's best video yet.
My computer science professor told me that another aspect would limit the speed of computers: the speed of light. If we increase the speed of CPUs more and more, the electrons have to cross the CPU chip in a shorter and shorter time. At some point, they will reach a physical speed limit (practically not even close to the speed of light), so they won't be able to travel the 2-5 centimetres from one edge of the CPU to the other during that single clock cycle.
According to the professor, if we reach that limit, our only chance to further increase computing speed would be parallelization across multiple of those close-to-the-limit chips.
I did the calculation for some Intel Core i7 chip (37.5 mm die edge, 3.7 GHz clock): 37.5 mm is 0.0375 m, and 3.7 GHz means 1/(3.7E9) seconds per cycle.
==> (0.0375 m) / (1/(3.7E9) s) = 138,750,000 m/s (that's already 46.25% of the speed of light, and not in a vacuum but in solid matter!)
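The same calculation in a few lines of Python, so the assumptions are explicit; note it assumes a signal must cross the full die edge within one clock cycle, which real pipelined CPUs deliberately avoid:

```python
# Reproducing the calculation above. The die edge and the one-cycle
# traversal requirement are the comment's assumptions.

die_edge = 0.0375            # m (37.5 mm, the assumed i7 package edge)
clock = 3.7e9                # Hz
c = 2.99792458e8             # speed of light in vacuum, m/s

speed_needed = die_edge * clock          # m/s to cross the die each cycle
print(f"{speed_needed:.3e} m/s = {speed_needed / c:.1%} of c")
# ~1.39e8 m/s, ~46% of c -- and signals in silicon travel well below c.
```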
@@DeBukkIt But do CPUs work by electrons moving across the whole CPU at a frequency-derived velocity?
Yes to electrical engineering-phile!
I understand you. I did mathematics as my undergrad, and I almost switched to electronics engineering, and I still don't understand why I didn't swap, because today I am doing a masters degree in microelectronics. But the mathematics background gives me a strong foundation, and a mind that can grasp anything. I advise anyone who wants to get into the hard sciences to first get a maths degree if you can, and only then go into your chosen subject, be it physics, chemistry, computer science, electronics, whatever. Maths is the queen of science as Gauss said.
10**50 operations per second in what frame? Per watt, compute thread, cubic meter of cpu?
Per kilogram of mass, look up Bremermann's limit.
Holy crap. That ending comparing height to the observable universe really set things into perspective.
Yay, I have the exact same mantra as a student. I always try to code stuff to truly "get it". If I can explain a concept to a dumb machine, I must know it.
Can you code quantum mechanics? Didn't they say about quantum mechanics: if you think you understand it, you don't understand it?
I think codes exist for it, but the problem is that it's too computationally intensive for more than a few particles. This is because you have to represent the state of the system as a "probability distribution" over every possible arrangement of the particles, to take account of entanglement. (Caveat: not really probability, since it's complex-valued.)
Sources:
ua-cam.com/video/w7398u8G588/v-deo.htmlm8s
en.wikipedia.org/wiki/Schr%C3%B6dinger_equation#Particles_as_waves
Same here ... Now imagine ... what if Everyone on the Planet did this... instead of Conflict there'd be World Peace... haha
@@crabsynth3480 What? Why?
IBM has classical simulations of quantum computing. Someone had to code them. :)
As a programmer I have to say that programming a (to me) complex bit of maths helps me understand so much more about it. I don't understand how this works; for years people have told me that if you're programming you're doing maths, but to me personally it just feels so different, so much more friendly and specific.
But can it run crysis?
Sorry I had to. Imagine gaming in a dual blackhole Intel cpu.
Watching this (and Numberphile...) makes me feel so nerdy and happy... I really enjoyed this video, thanks!
Why are there pink fluffy ears on the desk?
Phil dressed as a sexy animal for Halloween.
Because there needs to be.
It is addressed at the end of the video. I read the comments early and hadn't even noticed them. He likes his props, it seems. It does help translate harder-to-understand concepts into much simpler ones.
There's also what appears to be a tree branch leaning against the wall.
Harry Potter, The Death of Expertise, Maxwell's Demon, and pink bunny ears are the essential accoutrements of any working physicist. Especially those into metal.
I have been eating up videos from this channel. Not a clue for most of it but I love it, cheers :)
Buy some new strings
50-hour video on the Observer's Paradox, I'd GLADLY watch it!
Where’s the physicists’ video?
+Ambroisie oops link coming
+Computerphile ua-cam.com/video/mBdCE5hOexM/v-deo.html
I have so many questions for Prof. Moriarty! Tell him to get a twitter!
I don't know if anyone has already pointed this out, but 1 FLOPS (floating point operation per second) isn't equivalent to the 1 bit-operation per second used by the MIT paper. Floating point operations are complex, and there are multiple types of floating point function. 1 FLOPS roughly translates to 20,000 bit-operations per second on average, according to some papers. So we are five orders of magnitude closer to the fundamental limit than this video suggested at the end.
There is also another issue with comparing FLOPS to the figure given by this paper. Notice that FLOPS is per second, while the 10^50 figure isn't divided by time. They simply convert a kilogram of mass into pure energy and calculate how many operations that much energy can perform. When we talk about laptop level, we usually associate it with a power envelope. Based on some rough calculation from their numbers, for a 100 W laptop the fundamental limit would be around 3x10^31 FLOPS.
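For what it's worth, that 3x10^31 figure is reproducible from the Margolus-Levitin bound of 2E/(πħ) operations per second, if you interpret a "100 W laptop" as roughly 100 J of energy actively driving computation (a big simplification) and use the ~20,000 bit-ops-per-FLOP conversion quoted above:

```python
# Reproducing the rough estimate above. Treating "100 W" as ~100 J of
# energy in play is the comment's simplification, not a rigorous step.

from math import pi

hbar = 1.054571817e-34       # reduced Planck constant, J*s
E = 100                      # J, assumed energy driving a 100 W laptop
bitops_per_flop = 20_000     # rough conversion quoted above

bitops_per_s = 2 * E / (pi * hbar)       # Margolus-Levitin bound
flops = bitops_per_s / bitops_per_flop
print(f"{bitops_per_s:.1e} bit-ops/s -> {flops:.1e} FLOPS")  # ~3e31 FLOPS
```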
I really love Prof Moriarty, thanks for putting up with what you had to put up with...I really appreciate it.
He brought a fair amount of it on himself, though.
Sigh, 2017 in a nutshell, being a decent person is now "bringing it on yourself".
Well the argument about sexual dimorphism was one thing... I was thinking more along the lines of him not acting like a decent person when he made repeated personal attacks instead of proper debate.
No, nobody's talking about that, you and moriarty seem to misunderstand things in exactly the same way.....
Weird how it's always the people with no education or expertise in science that accuse actual scientists of denying biology. I don't believe he doxed anybody, I think you're making that up.
- the video is about the fundamental limit on the speed of computation
- the guy explains that he's a physicist but used to program: "if I can't program it, I don't understand it"
- what do computers do? we input data, perform computations, get data out
- reversible computation, the tennis-ball analogy: the original information is lost after the computation and you can't tell what the input was, just as energy is lost in mechanical systems
- 3:54 there is a direct relationship between the reversibility of a computation and the contents of its output
- 4:59 there are several ways to end up with a 0
- with an ideal Fredkin gate there would be no energy loss during computation, because the losses arise not from computing but from erasing information
- 7:46 even using Fredkin gates, you still run into the Heisenberg uncertainty principle
- 9:20 if a note sounds for a long time, you can easily tell its frequency; if it's short, it's hard, because it contains many different frequencies
- 11:15 (with great simplification) there is a relationship between frequency and time
- the higher the frequency of operations, the more energy is needed to operate at it
- 12:20 the paper talks about the "ultimate laptop" - essentially this computer is an insanely hot plasma, not today's supercomputers
- computers in 2020 = 1 exaFLOPS (10^18); computers in 2030 = zettaFLOPS (10^21); the limit = 10^50
- compares a person's height to the size of the observable universe:
a difference of 26 orders of magnitude
- the difference in computation speed is 29 orders of magnitude (comparing 10^21 to 10^50 FLOPS)
Man I love Prof. Moriarty. He's one of my favorite things about Sixty Symbols and his very rare appearances here are great.
Does Phil teach? From start to finish he was thorough and his analogies were perfect. I feel like I could actually make it through a university-level physics class if he was teaching. Thank you for the content, everyone.
The whistle was 1150 Hz
This video was a roller coaster from start to finish
I'm a simple man. I see a video with Prof. Moriarty, I press like.
13:32 there was a computer that achieved an exaFLOP of performance back in April of 2020 btw.
That was technically a network of computers, not a single machine. A single machine performing exaflops has yet to happen.
That was some quick thinking for the explanation of the fluffy ears. Nice
Another aspect, or perspective, on the lower limits of the computational scale in both space and time concerns being able to distinguish a Zero from a One. The smaller and faster your computing elements become, the more errors that are inevitably going to occur. Error correction techniques can mitigate these errors, but it soon gets to the point that the computation for error correction greatly exceeds the computation of the problem we seek to solve.
We are already seeing this in quantum computer designs. One recent design required 28 qubits to create a single stable and reliable qubit for computation. In other words, only 3.6% of the qubits are used for computation, and 96.4% of the qubits are needed just to make the computation work in the real world.
This is the "Law of Diminishing Returns", which is actually Murphy's Law writ large.
This episode felt all over the place.
I watched the first 9 minutes and he had not even begun to talk about computing limits.
Yes we know the speed of this episode.
It reminds me of one of my tangential rants, where I find an even cooler topic and then never fully explain the first thing I said.
"What if I do this? ... What if I do this!" *plays some death metal to explain quantum physics
The information density had a high fluctuation, (lecturers have to fill time and say things multiple ways to cover the various ways that people learn) but the information is still useful.
These videos are just amazing. Congratulations to the professors and to whoever had the idea of making them (the videos, not the professors. Though they deserve some credit too (the professor makers, I mean)). They surely make me wish I had studied at Nottingham.
Link to the paper: arxiv.org/pdf/quant-ph/9908043.pdf
Damn, Phil Moriarty is really good at explaining advanced computer science in a way that's comprehensible. Love to see more of him!
I don't agree with Phil's political views either, but would you leave the hate out of a video where he's not spouting his political views - it's educational, there's nothing to dislike about it.
Swankity Dankity, didn't he sanction a deplatforming campaign? If it wasn't him, then I'm with you.
+TRBRY Why does it matter whether it was him? This video had nothing to do with it in either case.
Nnotm, if he is that kind of person, I would find it odd for people to complain when others do the same thing to him. People who deplatform are trying to make someone a social pariah.
Thanks to Professor Moriarty and the Computerphile team, it was a very interesting topic.
That accent...
Poor becomes purr
Speed becomes spade
Informeshon
Computeshon
I like it.
scottish people amirite
Amazing, the ending blew my mind completely. Also, as a guitarist in a metal band I really enjoyed the explanation haha
I'm a physicist and there's no card neither a link in the description. Guess I'm too early.
+Nikolay Yakimov ua-cam.com/video/mBdCE5hOexM/v-deo.html
Thanks
That Lloyd paper is a good read. Some notes / takeaways:
* The ultimate laptop is arbitrarily chosen to be 1 kg and 1 liter, which gives it 10^51 ops/second and 10^31 bits of memory. While processing power is simply proportional to mass (here it's 10^51 ops/second), the memory is more complicated, but for a fixed energy density it is an extrinsic property, i.e. double the energy and the volume will give double the memory.
* The mass & volume choice also sets the "degree of parallelization." For his choices the system is highly parallel (10^10 degree of parallelization). It would be very inefficient if given a serial task. If we want to do serial computation, we compress the computer which reduces its memory and parallelization. At the extreme, a black hole is fully serial, and for 1 kg it would store 4x10^16 bits while still achieving 10^51 ops/second.
* Then there's the matter of waste / energy consumption. Lloyd notes that error correction will require eliminating incorrect bits. Any removal of information to the environment comes at an energy cost (it's an irreversible operation). And there is a limit to how quickly we can do this (analogous to a computer's ability to cool itself) which suggests that the ultimate laptop can't handle more than 10^(-10) errors/operation. If it's at this limit it will also consume 4x10^26 watts of power. For systems that are less parallel, this is less of a problem (error rate threshold ~ 1/(degree of parallelization)).
Two comments:
* This applies to both quantum and classical computers (and presumably any yet-to-be-discovered type). However, the benefit of quantum algorithms is that they can change the cost formula, e.g. Grover's algorithm does a search which would take N operations classically in just sqrt(N) operations. Therefore, even if the number of operations that we can perform is physically limited, there is no clear limit on what we can actually do with a given number of operations; it changes anytime we discover new algorithm types that have lower costs.
* Personally, I'm skeptical about reversible computations. Intuitively, if we perform some complex operation on a reversible computer, all the initial information must still be present at the end for reversibility to work. Yet so much of 'intelligence' follows this pattern: consume lots of data, filter it to find something interesting, and then do lots of processing (often expanding the data) on that specific finding. In a reversible computer, all that initial data that was filtered out has to remain in the computer, as what basically amounts to garbage bits. I could be wrong, but I think all emulations of irreversible algorithms with reversible gates result in this sort of garbage. At some point we will have to clear these bits to make room for more processing, and doing so comes at the standard kT ln(2) energy cost per bit. So instead of just considering the removal of error bits, I would argue that if we're interested in performing any useful/intelligent calculation, there is going to be some rate of cleared bits per operation in order to complete the calculation and bring the computer back to its initial state (in which the memory is not full of garbage). It will be characteristic of the algorithms/computations being done, but probably much higher than the 10^(-10) error rate required for this ultimate laptop to operate (which would just mean we would have to settle for a somewhat less 'ultimate' version).
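As a sanity check on the quoted 4x10^26 W: the paper's ultimate laptop runs at roughly T ~ 5x10^8 K (quoting from memory, so treat the exact value as an assumption), and erasing bits at the error-rate limit lands in the same ballpark:

```python
# Sanity-checking the ~4e26 W figure quoted above. The temperature is an
# assumption recalled from the paper, not a verified value.

from math import log

k_B = 1.380649e-23           # Boltzmann constant, J/K
T = 5e8                      # K, approximate ultimate-laptop temperature
ops_per_s = 1e51
errors_per_op = 1e-10        # the quoted error-rate threshold

erase_rate = ops_per_s * errors_per_op        # bits/s rejected to the environment
power = erase_rate * k_B * T * log(2)         # J/s spent erasing them
print(f"{power:.1e} W")      # ~5e26 W, same ballpark as the paper's 4e26 W
```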
I thumb up every video that sends me to Wikipedia.
Likewise. I like to think and learn.
Have a total geek crush on Dr. Moriarty! :) This video touches on some very interesting and far-out points in a very nice and understandable way. Am a computer engineer; love the physics in this video. Thank you for the great work you do at Computerphile and the University of Nottingham.
But can it run Crysis ?
I like how Phil's acoustic guitar is tuned to C#. Both a man of metal and computing.
This probably needs the word RANT in the title...
This explanation is phenomenal
Assembly language. The only fun way to go.
It's about time you did a video on quantum computation.
Phil is such a great presenter. I don't care if he has different political opinions; when it comes to science, he's fantastic.
Totally agree.
Do you know him?
agreed.
What absurd political opinions?
Yeah, haven't heard of that either?
"If I can't code this I don't understand it", nice mantra to live by.
The limit is Crysis.
Love this topic so much; it's my life - trying to understand what kind of place I am living within.
Approximately C#/Db above middle C, around 277 Hz
Fantastic video! I really appreciate the link between the tennis ball system and the logic gate. Thanks!
Not exactly the most organized presenter on this channel.
9:45 I assume that's why you can play drums over a track in any key and tuning drums is not essential: the time is short and the frequency range is wide. But then what about cymbals?
what's with all the negativity towards prof Moriarty?
Jan U His political leaning is the reason why that is
Jan U
Probably because he's a criminal mastermind, whose intelligence is only beaten by Sherlock Holmes.
Oh, you were talking about Professor Phil Moriarty? No idea.
I see none here.
Rather simply, he sided with some SJWs in one of those "questions for " videos.
Jan U He dared to express left-wing opinions and frothing bigots caught wind of them?
Thank you so much for this video. Been interested in that topic for years, but never found any information on it.
Love seeing you back. As ass-backwards as you behaved, I still love you and what you do; you made a Christmas tree out of atoms, for feck's sake!
Say, what?
Got that Paul Davies book when it came out during my physics a-level. Blew my mind.
I like Phil
Considering he doxxed a fellow researcher because they had differing political opinions... yeah, I don't like Phil.
I loved this presentation and the editing was fun too.
Aww he put 0-0, 1-0, 0-1, 1-1, so not ascending ^^ just kidding, it doesn't change anything for the explanation
Ah, but don't you want A to be the LSB and B to be the MSB? And you wouldn't put the B column before the A column, would you?
(Yeah, it annoyed me too.)
Never figured the Prof. is a programmer type; thanks for sharing!
All physicists are, in a sense, programmers trying to reverse-engineer the universe.
3k likes vs 193 dislikes. Thunderf00t's fanboys always try and fail to brigade these videos.
Antenox Every video on YouTube has a little bit of dislikes. Get over yourself.
"The only way somebody could possibly dislike this video is if they're a fanboy!" - Not the words of a fanboy, according to you.
I mean, let's be honest - if Thunderf00t's fanbase *wanted* to brigade this with downvotes, I think they'd be able to do a bit better than 193. But, no, you think that every downvote ever has to ultimately trace back to him, because that's not a ridiculous position at all.
Wait, why would someone want to brigade this video? I'm honestly curious.
skun406 because phil got into some political stuff and some people might not like him for that. Also people are allowed to dislike a video.
Am I the only person that would absolutely love me some "electrical engineeringphile"?