This is basically a performance to cost issue. Because the cost of a simple 2 state circuit is so low (2 transistors), binary gives the most performance for the least cost. In other words, a 4-bit binary circuit is simpler than a single 10-state circuit.
Digital communications do use multiple voltage levels to pack more bits per clock cycle, though. PAM (Pulse Amplitude Modulation) is used in Ethernet and QAM (Quadrature Amplitude Modulation) in WiFi.
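For anyone curious how that packs more bits per symbol, here is a toy sketch of PAM-4 style encoding (illustrative Python only; the exact bit-to-level mapping varies by standard, this one is a hypothetical Gray-style ordering):

    # map pairs of bits onto 4 voltage levels (PAM-4 style);
    # Gray-style ordering so adjacent levels differ in only one bit
    PAM4_LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

    def pam4_encode(bits):
        # bits: flat list of 0/1 values, length must be even
        return [PAM4_LEVELS[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

    print(pam4_encode([0, 0, 1, 1, 1, 0, 0, 1]))  # [-3, 1, 3, -1]: 8 bits in 4 symbols

Two bits per symbol instead of one, at the cost of having to distinguish four voltage levels reliably.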
But I do want my guitar amp made with valves :) Interestingly, it's rare to see the bias applied at the grid in guitar amps; it's mostly in the cathode, with the signal on the grid. I guess you get better amplification when the signal is the control voltage on the grid, and you can let the cathode sit at a rather negative voltage.
Our Computer Architecture teacher says that trinary is better performance- and size-wise compared to binary, but people started with binary and stayed with it, just like the QWERTY keyboard over the Dvorak, even though the Dvorak keyboard is better.
NoriMori An article claimed that a person with the same amount of experience on Dvorak manages to type 20%+ faster than on a QWERTY keyboard, because of the key layout (of course that's language-specific).
Captain Nevaran I'd personally require the source of this article and how the experiment was conducted, but yes, I definitely wouldn't doubt that being the case. However, we're never going away from QWERTY now.
@@NevaranUniverse What kind of trinary computer did your teacher suggest? Tri-state CMOS logic where the third state is a high impedance state where the output is disconnected from both ground and the power supply?
I just realized that I don't understand the physics of the "vacuum tube" transistor. I'll probably google it in a few days, but if you made a video about it, that would be awesome!
He didn't talk about transistors. He gave historical reasons, but didn't say why binary is STILL the industry standard. Are the reasons presented still applicable today, even with transistors and newer technologies?
I have the greatest respect for Professor Brailsford, and I realize that the subject of electrical noise may be too complex to introduce on Computerphile. However, it is difficult to understand how base 2 can be justified for use in computers without a discussion of electrical noise and the logic levels that represent the 0/1 of digital computers. observerms
When you look at the first applications, the code breakers were dealing with Morse or 5-bit radio telex signals. A complex multi-level system to do base-10 arithmetic simply wasn't required. The messages were sent in extended alphabets, encoded by bits. I found it bizarre to suggest optimising computers for decimal, inflicting unnatural complexity on every operation.
I didn't understand why, now that we are having power consumption problems, we are still not moving towards either bi-quinary or decimal. Anyone understand that?
Ternary would be an even better option: about 64% more efficient than binary for storing a random data set. (Base e is perfectly efficient, but not much better than 3 and much, much harder to apply.)
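On the base-e point, the usual comparison is radix economy: the cost of storing N values in base b scales like b * log_b(N) = (b / ln b) * ln N, which is minimised at b = e, with 3 the closest integer. A quick check (illustrative Python):

    import math

    def radix_cost(base):
        # relative cost: states per digit divided by information per digit
        return base / math.log(base)

    for b in (2, 3, math.e, 4, 10):
        print(f"base {b:6.3f}: relative cost {radix_cost(b):.3f}")
    # base 3 (~2.731) narrowly beats base 2 (~2.885); base e (~2.718) is the optimum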
He just wanted to show off! But on a serious note... to store one bit of data you need one basic memory component, namely a flip-flop, and because we need to store, say, 99, we need at least 7 such basic memory blocks. But if somehow we had designed a basic memory block capable of having 10 states, to represent the 10 decimal digits instead of the current 2 for binary, then we would need only 2 basic memory blocks for storing 99, one for each digit. 7/2 = 3.5, which is slightly greater than 3.32 because we always need a whole number of elements. So our storage, and in fact the entire binary-based digital system, would be at least 3.32 times more bulky than its decimal-based counterpart.
As he mentions later in the same video, it is used to calculate the maximum number of bits required to represent any n-digit decimal number in binary. Log of 10 to base 2 is about 3.32. The professor mentions that if you multiply this value 3.32 by the number of digits n of your decimal number and then take the ceiling, you get the maximum number of bits required for its representation. For example, if you have a 2-digit number, say 35 or 48 or 99 (the greatest 2-digit number), then you require 2×3.32 = 6.64, and taking its ceiling gives 7. So a 7-bit binary number can represent all 2-digit decimal numbers. Similarly, for 3-digit numbers, say 999, the maximum number of bits is 3×3.32 = 9.97, and the ceiling of 9.97 is 10, so a 10-bit binary number can represent all 3-digit decimals. Same for 4 digits and so on. Binary numbers reduce the complexity of the logic but, as you can see, increase the amount of circuitry: for a 2-digit decimal number you end up with 7 storage elements instead of 2.
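A quick way to check those numbers, using the ceil(n × log2(10)) rule described above (illustrative Python):

    import math

    def bits_for_decimal_digits(n):
        # smallest number of bits that can represent every n-digit decimal number
        return math.ceil(n * math.log2(10))

    for n in range(1, 5):
        print(n, "decimal digit(s) ->", bits_for_decimal_digits(n), "bits")
    # 1 -> 4, 2 -> 7, 3 -> 10, 4 -> 14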
You could argue that binary is not in fact used for DSL transmission where there can be as many as 15 bits per symbol. They use a combination of voltage levels and phase modulation to encode more bits into each symbol.
Storage is one thing, but what about the logic itself, which is inherently binary? That would all have to be converted to base 5 or base 10 or whatever. I feel like that would be incredibly difficult, but maybe I'm missing something.
Binary is easily worked with as hexadecimal and octal. Hexadecimal needs fewer digits: 0xFF is 255 vs. 99 being 0x63, and octal is fairly close: 99 is 143 in octal, and 77 in octal is 63.
But now the question becomes this: if you're using multiple analog voltage separations to encode the values 0-4, how do you *store* those values? Currently we can store data because we can turn things on/off, or reorient magnets north/south, etc. How would you do that with 5 possible states, or worse, 10?
***** That works for large-scale machines, but how could you do it fully electronically so it can be miniaturized enough for laptops, tablets, phones, or even just desktop PCs?
+Joe Mills Multiples of 2 are what makes sense given the computer systems we have that use the storage. If you build a 5-level system, you're of course free to use only 5 of the 8 levels in TLC flash (or build specific 5-level flash). What I replied to was the question of how to store multiple levels in memory, not whether such existing memory is binary-based.
Michael Tempsch But the entire point of the video was "why don't we use something other than binary?" To say "you could do it by using parts of binary" is redundant.
+IceMetalPunk As stated, you don't have to use 'parts of binary.' You could design a specific 5-level flash memory - the tech is there, currently up to 8 levels. Given this question in your original post: "Currently we can store data because we can turn things on/off, or reorient magnets north/south, etc. How would you do that with 5 possible states, or worse, 10?", I pointed to a current technique that actually does this. I fail to see how the basic technique must be disqualified because it in current implementations uses a number of levels that is a power of 2.
Let's say we build a hexadecimal (or any other base>2) computer. For every value we then have to differentiate between 16 distinct 'positions'. If we can build technology that precise then we can also (in most cases) build technology where those hexadecimal numbers are replaced with 4 binary bits 1/4th the 'size' each (or whatever measurement is relevant). Since bits are simpler and easier to read they are readable even at smaller sizes. Since a lot of operations (like logic gates) are naturally done with binary it is easier to build a binary computer. And that is why binary is the favoured number system for electric computers.
There were some attempts at ternary computers in the Soviet Union, but circumstances of the Cold War led them to be scrapped in favour of stealing binary systems from the West to save resources and research time.
I wonder... with our current manufacturing and fabricating abilities, is making a decimal computer system still THAT inefficient compared to binary anymore? I mean yeah it might be a little bit, but considering how small we can make things, how efficient on power they are, it has to be somewhat plausible. I'd love to see that as an exploration of our computing abilities to see if perhaps there is a better way to, well, computer, from the ground up.
Babbage should have invented the CNC mill. The textile industry had already begun to embrace that kind of automation; this would not have been a foreign concept to him.
This needed an example of 0-4 as far as voltage goes. Would they still be using +5/-5 and just detecting outputs at -5, -3, 0, +3, +5? MUCH more info is needed on how they decided it was "stable". I can see computers splitting logical commands from calculator commands, but... I'm really concerned with the latency of transferring data between base 5 and base 2. Also concerned that data storage wasn't mentioned. Please do a follow-up.
In Soviet Russia they made a trinary computer; it was much more efficient compared to binary. But unfortunately, because of money issues, they didn't develop it further and started copying the West. The interesting part was that instead of 1 and 0 they used 1, 0 and -1, which made negative numbers easier to express than in binary. Also some calculations were easier.
There is *so* much Soviet tech with incredible potential that we just straight up lost because of the pressures and constraints of the Cold War. I'm not mad the Soviets lost. I'm mad at the science that never happened because they were forced to fight a war instead of spending time and resources on scientific pursuits for the sake of scientific advancement.
Well, we've had analog computers before where precise voltages represent numbers and various circuits combine those voltages in different ways. At some point you have to take a measurement and record a number which will be accurate to whatever precision the machine allows. Logic would work along those lines, but the advantage of binary is the complete lack of ambiguity and the ability to determine the state of any bit with a single voltage threshold.
Why not mention anything else, like balanced ternary? The question "why binary" is not answered in this vid. It seems more like a trailer for a vid that would actually answer the question, with a couple of interesting tangential facts tossed in.
Probably should mention Binary Coded Decimal. That's where you use 4 bits to encode each decimal digit, and just don't use 0b1010, 0b1011, 0b1100, 0b1101, 0b1110, 0b1111.
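A tiny sketch of that encoding (illustrative Python; the helper name is made up):

    def to_bcd(n):
        # pack each decimal digit into its own 4-bit nibble, most significant digit first
        value = 0
        for digit in str(n):
            value = (value << 4) | int(digit)
        return value

    print(hex(to_bcd(99)))   # 0x99  (nibbles 1001 1001, one per decimal digit)
    print(hex(to_bcd(255)))  # 0x255 (three nibbles, vs 0xff in plain binary)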
As an electrical engineer who has been designing / building analog and digital circuits for work since the late 80's, I'd have to say that the main reason for going binary boils down to power - at least with transistors. The MOSFETs in modern computers consume very little power when they are "ON" (1) or "OFF" (0). When ON the voltage is essentially at the positive power supply rail, and when off they are at ground - a voltage swing that is nearly 100% of the power supply rail. Any time they are between those 2 voltages, (when switching between states for example,) they consume many times more power.
It is entirely possible to build transistor circuits with multiple logic levels, but you will suffer a large increase in power when doing so. However, switching between those states will be faster - turning fully ON (saturating) or OFF takes extra time. CRAY computers took advantage of ECL (Emitter-Coupled Logic) and their faster speed by using bipolar transistors that were always "on," i.e. conducting current. The 1 and 0 states were still differentiated by high and low voltages, but their swing was only about 16% of the power rail. This led to very fast computers for Cray, but also required those machines to use exotic and expensive cooling systems to keep running.
Over time however, MOSFETs have gotten much smaller and faster - so much so that the ability to use MANY more of them for the same amount of power greatly overcomes the speed advantage of using ECL or other non-saturating logic.
Arguably there is another low-power state MOSFETs could use - the Tri-state output - it is neither at the high or low voltage, but rather disconnected from the line. This however would take more transistors at each input and output to decode and encode the signal. And this system is already in use in computers - but it is used for allowing multiple devices to share the same bus (RAM, for example). As far as I know, no one has ever found it advantageous for performing logic or arithmetic as part of the data processing tasks.
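To illustrate the bus-sharing use of that third state, here is a toy model (illustrative Python, not any real hardware description): each device either drives the shared line with a 0/1 or leaves it in high impedance ("Z").

    def bus_value(drivers):
        # drivers: list of (enabled, value) pairs all wired to one shared line
        active = [value for enabled, value in drivers if enabled]
        if not active:
            return "Z"            # nobody driving: the line floats
        if len(active) > 1:
            return "contention"   # two devices fighting over the bus: an arbitration bug
        return active[0]

    print(bus_value([(True, 1), (False, 0), (False, 1)]))  # 1: only one driver enabled
    print(bus_value([(False, 0), (False, 1)]))             # Z: bus released for another device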
As someone who understands this stuff: why binary, though? Wouldn't it be more efficient to use more numbers based on different voltages? While it would uproot the entire system everything is built on, wouldn't it create a faster computer? While more complicated, wouldn't it allow faster information travel? Instead of 10010 you'd just say 4 or 9, or even a 2-digit number that would relay the same information. I'm just asking.
@@learn905 He just explained that. He said you CAN do this stuff and it IS faster, but at a much greater power requirement. He also said that since MOSFETs have become smaller and faster, you're really not saving much time or efficiency by switching to a non-binary system. It's much more effective, at this time anyway, to remain in a binary system.
I have so much respect for this man, absolutely fascinating!
He is a lovely man too! He taught me at university!
@@mrs-m He's a pleasure to listen to and quite informative.
This felt like the first half of a video. Seems like he got to the point of saying "bi-quinary is better than binary" and then didn't deliver a punchline as to why we wound up with binary anyway.
Left as an exercise for the reader. Anyway, it's because modern bits have lower voltage differences, so it's not as feasible to do multiple voltage levels. You can also see that with reduction in efficiency in TLC/QLC memory in SSDs.
Yevgeniy Gorbachev Didn’t he give that as a reason for not using base 10? I thought he was saying Flowers reckoned he could make biquinary work...Perhaps he was just mistaken in that and quickly discovered as much
@@yevgeniygorbachev5152 Voltage has nothing to do with it. It's a matter of efficiently maximizing space, because binary requires fewer transistors to count. 8 transistors can represent up to 256 in binary, because it uses exponential counting; decimal needs 10 transistors to get to 55.
Binary Maximum Count: 2+4+8+16+32+64+128=256.
Decimal Maximum Count: 1+2+3+4+5+6+7+8+9+10=55
@@mikeef747 Why are you multiplying by two in one expression and adding one in the other? I was under the impression that decimal was 1 + 10 + ...
@@yevgeniygorbachev5152 I think your misunderstanding on voltage is that you're thinking of electrical circuits vs computing systems. In electrical circuits, 0 = zero power, but 1 can equal different kinds of voltage. In computing, a lower voltage at a transistor = 0, any higher voltage = 1, and with no voltage the computer is off.
It's about counting in powers. The binary counting system is base-2, which means each extra digit is another power of 2. In decimal, it is base-10, so each extra digit is another power of 10.
This man is a gifted communicator. His life's work has definitely advanced mankind. Respect +1
This series of videos is truly great.
I absolutely love listening to Professor Brailsford.
I'm a unix and storage guy. I got into this because i loved the technology, but after a while it gets to be a bit of a drag.
Watching these videos helped me reignite the passion that made me get into this field.
I love it!
Did you notice that they used +5V and -5V? So why not 0V as well? Well, there was actually an experimental computer using balanced ternary (as this system is called) but it was more of emulating it than using it. Transistors are binary only and that's the main reason why we use binary nowadays. However as transistors are very close to hit their physical limits, new methods are developed and these methods (optical, Josephson junction) are in fact ternary. Donald Knuth (father of the analysis of algorithms and author of The Art of Computer Programming, among bazillion other achievements in computer science) predicted that balanced ternary would be the system of the future.
(I hope this will be covered in a future Computerfile video.)
I would love to see a balanced ternary video. The logic behind it is just so nice, and not all that much more difficult really.
Equally, something I wish had been experimented with was ternary or quaternary on hard drives. But it wouldn't matter much now, since HDDs are basically on their way out.
The only really large density increase recently was SMR, which is horribly slow and only aimed at archival markets.
SSDs are much cheaper now and slowly catching up on price and density.
+Jan Sten Adámek 01000100 01101001 01100100 00100000 01111001 01101111 01110101 00100000 01101110 01101111 01110100 01101001 01100011 01100101 00100000 01110100 01101000 01100001 01110100 00100000 01110100 01101000 01100101 01111001 00100000 01110101 01110011 01100101 01100100 00100000 00101011 00110101 01010110 00100000 01100001 01101110 01100100 00100000 00101101 00110101 01010110 00111111 00100000 01010011 01101111 00100000 01110111 01101000 01111001 00100000 01101110 01101111 01110100 00100000 00110000 01010110 00100000 01100001 01110011 00100000 01110111 01100101 01101100 01101100 00111111 00100000 01010111 01100101 01101100 01101100 00101100 00100000 01110100 01101000 01100101 01110010 01100101 00100000 01110111 01100001 01110011 00100000 01100001 01100011 01110100 01110101 01100001 01101100 01101100 01111001 00100000 01100001 01101110 00100000 01100101 01111000 01110000 01100101 01110010 01101001 01101101 01100101 01101110 01110100 01100001 01101100 00100000 01100011 01101111 01101101 01110000 01110101 01110100 01100101 01110010 00100000 01110101 01110011 01101001 01101110 01100111 00100000 01100010 01100001 01101100 01100001 01101110 01100011 01100101 01100100 00100000 01110100 01100101 01110010 01101110 01100001 01110010 01111001 00100000 00101000 01100001 01110011 00100000 01110100 01101000 01101001 01110011 00100000 01110011 01111001 01110011 01110100 01100101 01101101 00100000 01101001 01110011 00100000 01100011 01100001 01101100 01101100 01100101 01100100 00101001 00100000 01100010 01110101 01110100 00100000 01101001 01110100 00100000 01110111 01100001 01110011 00100000 01101101 01101111 01110010 01100101 00100000 01101111 01100110 00100000 01100101 01101101 01110101 01101100 01100001 01110100 01101001 01101110 01100111 00100000 01101001 01110100 00100000 01110100 01101000 01100001 01101110 00100000 01110101 01110011 01101001 01101110 01100111 00100000 01101001 01110100 00101110 00100000 01010100 01110010 01100001 01101110 01110011 01101001 01110011 01110100 01101111 01110010 01110011 00100000 01100001 01110010 01100101 00100000 01100010 01101001 01101110 01100001 01110010 01111001 00100000 01101111 01101110 01101100 01111001 00100000 01100001 01101110 01100100 00100000 01110100 01101000 01100001 01110100 00100111 01110011 00100000 01110100 01101000 01100101 00100000 01101101 01100001 01101001 01101110 00100000 01110010 01100101 01100001 01110011 01101111 01101110 00100000 01110111 01101000 01111001 00100000 01110111 01100101 00100000 01110101 01110011 01100101 00100000 01100010 01101001 01101110 01100001 01110010 01111001 00100000 01101110 01101111 01110111 01100001 01100100 01100001 01111001 01110011 00101110 00100000 01001000 01101111 01110111 01100101 01110110 01100101 01110010 00100000 01100001 01110011 00100000 01110100 01110010 01100001 01101110 01110011 01101001 01110011 01110100 01101111 01110010 01110011 00100000 01100001 01110010 01100101 00100000 01110110 01100101 01110010 01111001 00100000 01100011 01101100 01101111 01110011 01100101 00100000 01110100 01101111 00100000 01101000 01101001 01110100 00100000 01110100 01101000 01100101 01101001 01110010 00100000 01110000 01101000 01111001 01110011 01101001 01100011 01100001 01101100 00100000 01101100 01101001 01101101 01101001 01110100 01110011 00101100 00100000 01101110 01100101 01110111 00100000 01101101 01100101 01110100 01101000 01101111 01100100 01110011 00100000 01100001 01110010 01100101 00100000 01100100 01100101 01110110 01100101 01101100 01101111 01110000 01100101 01100100 00100000 01100001 01101110 01100100 
00100000 01110100 01101000 01100101 01110011 01100101 00100000 01101101 01100101 01110100 01101000 01101111 01100100 01110011 00100000 00101000 01101111 01110000 01110100 01101001 01100011 01100001 01101100 00101100 00100000 01001010 01101111 01110011 01100101 01110000 01101000 01110011 01101111 01101110 00100000 01101010 01110101 01101110 01100011 01110100 01101001 01101111 01101110 00101001 00100000 01100001 01110010 01100101 00100000 01101001 01101110 00100000 01100110 01100001 01100011 01110100 00100000 01110100 01100101 01110010 01101110 01100001 01110010 01111001 00101110 00100000 01000100 01101111 01101110 01100001 01101100 01100100 00100000 01001011 01101110 01110101 01110100 01101000 00100000 00101000 01100110 01100001 01110100 01101000 01100101 01110010 00100000 01101111 01100110 00100000 01110100 01101000 01100101 00100000 01100001 01101110 01100001 01101100 01111001 01110011 01101001 01110011 00100000 01101111 01100110 00100000 01100001 01101100 01100111 01101111 01110010 01101001 01110100 01101000 01101101 01110011 00100000 01100001 01101110 01100100 00100000 01100001 01110101 01110100 01101000 01101111 01110010 00100000 01101111 01100110 00100000 01010100 01101000 01100101 00100000 01000001 01110010 01110100 00100000 01101111 01100110 00100000 01000011 01101111 01101101 01110000 01110101 01110100 01100101 01110010 00100000 01010000 01110010 01101111 01100111 01110010 01100001 01101101 01101101 01101001 01101110 01100111 00101100 00100000 01100001 01101101 01101111 01101110 01100111 00100000 01100010 01100001 01111010 01101001 01101100 01101100 01101001 01101111 01101110 00100000 01101111 01110100 01101000 01100101 01110010 00100000 01100001 01100011 01101000 01101001 01100101 01110110 01100101 01101101 01100101 01101110 01110100 01110011 00100000 01101001 01101110 00100000 01100011 01101111 01101101 01110000 01110101 01110100 01100101 01110010 00100000 01110011 01100011 01101001 01100101 01101110 01100011 01100101 00101001 00100000 01110000 01110010 01100101 01100100 01101001 01100011 01110100 01100101 01100100 00100000 01110100 01101000 01100001 01110100 00100000 01100010 01100001 01101100 01100001 01101110 01100011 01100101 01100100 00100000 01110100 01100101 01110010 01101110 01100001 01110010 01111001 00100000 01110111 01101111 01110101 01101100 01100100 00100000 01100010 01100101 00100000 01110100 01101000 01100101 00100000 01110011 01111001 01110011 01110100 01100101 01101101 00100000 01101111 01100110 00100000 01110100 01101000 01100101 00100000 01100110 01110101 01110100 01110101 01110010 01100101 00101110 00001101 00001010 00001101 00001010 00101000 01001001 00100000 01101000 01101111 01110000 01100101 00100000 01110100 01101000 01101001 01110011 00100000 01110111 01101001 01101100 01101100 00100000 01100010 01100101 00100000 01100011 01101111 01110110 01100101 01110010 01100101 01100100 00100000 01101001 01101110 00100000 01100001 00100000 01100110 01110101 01110100 01110101 01110010 01100101 00100000 01000011 01101111 01101101 01110000 01110101 01110100 01100101 01110010 01100110 01101001 01101100 01100101 00100000 01110110 01101001 01100100 01100101 01101111 00101110
+Jan Sten Adámek Why not use 0V? Easy: fault detection. Especially in the early days, these valves burned out a lot.
+Herve Shango For anyone too lazy to find a place to convert it, this is what it converts to:
Did you notice that they used +5V and -5V? So why not 0V as well? Well, there was actually an experimental computer using balanced ternary (as this system is called) but it was more of emulating it than using it. Transistors are binary only and that's the main reason why we use binary nowadays. However as transistors are very close to hit their physical limits, new methods are developed and these methods (optical, Josephson junction) are in fact ternary. Donald Knuth (father of the analysis of algorithms and author of The Art of Computer Programming, among bazillion other achievements in computer science) predicted that balanced ternary would be the system of the future.
(I hope this will be covered in a future Computerfile video.)
+Jan Sten Adámek Not only did balanced ternary blow my mind, but Knuth also describes Fibonacci counting (very useful in information theory) and factorial counting (as a curiosity).
Another advantage of representing numbers in a binary format is it greatly simplifies error correction. Find the location of the error - and you automatically know the correct data - it's simply the inverse of the error.
Hamming codes you mean?
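For what it's worth, a minimal sketch of the point above: in binary, once you have located a bad bit, correcting it is just inverting it (illustrative Python):

    def correct_single_bit(word, error_position):
        # the corrected bit is simply the inverse of the wrong one,
        # so locating the error is enough: flip that one bit with XOR
        return word ^ (1 << error_position)

    received = 0b10110010                         # suppose bit 3 was flipped in transit
    print(bin(correct_single_bit(received, 3)))   # 0b10111010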
What doesn't come out here is that the process of doing digital electronic mathematics with anything other than two states requires far more complex electronics.
+chrisofnottingham I agree. It appears to be a minimization issue whereby you reduce the number of state-elements (0, 1, etc.) while also maximizing the state-space. Clearly, as one YouTube comment noted by suggesting a 1-based system of zeroes, the next best solution is zeroes and ones. Any extra state-elements added to the system have a progressively lower effect on the "usefulness" of their existence.
Well yes and no. The complexity from having more than two states will be figuring out how to make transistors relay more than two signals. The complexity of the circuit will overall stay the same.
+TJ.Lewis I'm not convinced the complexity does remain the same. Doing addition with more than two states pretty much turns into analogue computing plus multi level quantizing, which is very much more complex, or some kind of multi level logic that is processed using binary logic anyway.
Transistors and valves are just naturally binary or continuous. So we can do binary or continuous mathematics fairly easily but it just isn't easy to impose another fixed number of states. Whereas by contrast, gears can in principle work naturally in any base. It is just the nature of the medium.
chrisofnottingham I do not think it will become completely analogue computing for n states if n > 2, only because it is not a continuous sinusoidal signal. For instance, binary signals graphed out have pits and hills, because the two possible states make rectangles. With three states it starts taking the shape of a triangle. To play devil's advocate, at some point it will become somewhat sinusoidal, and graphing will have to be done using integration. As for complexity: even though the schematics of a processing chip, in terms of logic gates, are inherently binary by nature, that does not mean the chip as a whole cannot function using n > 2 states. With logic gates, any signal that is not a 0 or 1 will be lost or ignored.
+TJ.Lewis By using ternary, the logic gates would be much more complex. With binary, a logic gate is a very simple circuit with just 2 transistors. With ternary, those circuits would cost far more in complexity than what you gain.
This was actually quite interesting. I had heard in my computing course that it was because of the voltage variation, that they couldn't guarantee that say, 4 volts would always actually be 4 volts and wouldn't drift as he says and become 5 or 3 and mess things up but they could easily guarantee it with binary by making it simply any voltage or no voltage. This video showed that there was a lot more to it than that though, I never really thought about the debate while the technology was still being developed, only about why we use it in terms of modern computing.
Every idiot can count to one
-Bob Widlar
+AvZ „Astatine“ NaV Every idiot can count to 10.
tomlxyz Not with a single bit, you can't
AvZ NaV If you start with 1 you can.
+AvZ “Astatine” NaV "A" in hexadecimal, tadah.
You asked for it
WXVwLCBJIHdpbg==
The end of that video reminds me of Jevons Paradox. The basic idea is that by increasing efficiency of something (in an attempt to conserve that resource) you can drop the price which stimulates demand to a degree that more than makes up for the increase in efficiency.
Professor Brailsford is awesome! Thank you for introducing him; I only wish I'd had more profs like him.
I want to know more about the non-binary attempts at making computers. There isn't much data online about Dekatron vacuum tubes and the Setun balanced ternary computer.
You can search for analog computing; it's a non-binary approach to computing, which is more efficient but not scalable.
That "bi-quinary" system reminds me of those crazy "Diamond Edge 3D" cards that came out in the 90s - they rendered only in quadrilaterals and not triangles like modern graphics cards.
Speaking of which, it'd be cool to see more videos related to GPUs! (Pardon the non sequitur-ish nature of my request.)
This guy is fantastic! The way he talks about concepts that are so foreign to most - like it's nothing - is great. I'd love to chat with him even though I'd be lost.
A better question might be, why when your base building block consists of transistors, would you want to use any other base?
Balanced ternary.
Interesting video.
But the lack of using involute gears in that animation at 0:53 was a bit painful to watch.
+derbuchholzer If there's anything I can do to ease the pain I'll try.... >Sean
EDIT: Although, perhaps the fact that they're not involute contributes to the slip...
+Computerphile _Anything?_ ( ͡° ͜ʖ ͡°)
+derbuchholzer wow ur so clever plz marry me, bst cmmnt on utube, 10/10, point score > 9000.
+derbuchholzer can you explain to those of us who are wondering why you are getting likes?
+derbuchholzer does it grind your gears?
Ah, another lovely story by the brilliant Professor Brailsford. What voice he has, perfect for story telling. But maybe I'm biased by the fact that I'm American and any proper English accent sounds perfect for story telling.
+Franklin Cerpico
What, pray, is a 'proper' English accent? (speaking as someone from the north east of England) -- And I agree, his voice is perfect for story-telling.
Gammel Prutte Well, if I had to narrow it down, there's the accent which the Professor has, which I, for lack of a better word, label as 'proper' in order to distinguish it from a 'cockney' accent. Not to say a hint of 'cockney' isn't nice too, take Michael Caine for example.
i only understand like 3% of what this man is saying but i would watch him explain anything
In summary: decimal is only 3.3x more efficient than binary, while being significantly less reliable and harder to implement. It's simply not worth it.
No idea what was being said for 90% of this video, I understood every word being said, I just don't comprehend any of it.
+messianicrogue
tl;dr: Binary is not necessary, and the alternative might even be easier to build for some cases. But it would be a power hog like no other, and be quite costly.
Is that enough?
+109Rage Great summary, thanks!
+messianicrogue Mine was the opposite. His voice is too raspy for me to hear what he's saying.
+messianicrogue
Thats computerphile and numberphile for you
+109Rage Is it still a power hog with current technology? I just heard him say it was back in the day, and in the same exact vein as to why we use decimal in standard use, we still use binary.
Professor Brailsford is a treasure. I enjoy his videos so much!
As usual, this is wonderfully insightful. But, does it definitively answer the question, "Why Binary?" -- I must be missing something.
+Matt Maloney
Nowadays chip designers for arithmetic units will cheerfully go fully binary and accept the factor of 3.3 for the number of binary digits compared to decimal. This is because each binary logic element is simple and is a low-power transistor or capacitor. But transistors weren't invented until the 1950s. Hence, in Tommy Flowers' day in 1943, each logic element was a power-hungry valve. So, if he'd "gone binary" in his counters as well as for his logic elements, the extra power consumption was non-negligible. On the other hand, he couldn't "go fully decimal" because he couldn't keep 10 voltages stable and differentiated. Hence the "bi-qui" compromise. For more on this, look inside Jack Copeland's "Colossus" book on page 123 (see the EXTRA BITS video, linked off this one, for more about this book).
How about doing a video about modern vacuum tubes? They are used extensively in communication satellites (traveling wave tubes are pretty much the only amplifier that can reliably be used there while offering a small and compact package), and micro-tubes are being developed for use in cellphones and other microwave communication gear such as WiFi access points, so he might get tubes in his mobile sooner than he thinks :)
+DOGMA1138
They don't really have that much to do with computers, so I don't think they fit here (or people watching computerphile would be interested/have physics under their belt). They are analog amplifiers.
Imagine how big and power hungry a non-electronic smartphone-equivalent would be.
+C0deH0wler Probably something like this : vignette3.wikia.nocookie.net/fallout/images/7/76/Pip-Boy_3000.jpg/revision/latest?cb=20110712154420
+András Bíró Well that's still electronic.
*****
Electronic**. You don't call your speakers electromechanical devices because they have an analog knob that can turn, do you?
You should look at the lore yourself, it even explicitly states it's an electronical device ("The RobCo Pip-Boy (Personal Information Processor) is an electronic device manufactured by RobCo Industries."). Lack of transistors doesn't make it mechanical.
You don't know what electromechanical devices are.
This is getting me deeper into the rabbit hole of programming.
Professor Brailsford demonstrating a Samsung phone while talking about not wanting third degree burns from his computer is suddenly rather ironic.
that's not binary's fault LOL
feanenatreides xD
I dont know if I would call it ironic. Although it got a lot of media attention that defect was on one samsung design and only happened to a very small number of phones. I think it would be ironic if samsung continuously produced things that caught on fire to the point of being known as the phones that always catch on fire.
+Cole Knapek And it wasn't even a Samsung phone, it was a vendor-supplied battery.
feanenatreides lol.
I could listen to this man the whole day… And since I love to binge-watch that, I think I kinda do that.
Who else came expecting him to talk about ternary computers?
The Note 7 gives you 3rd degree burns even without those valves ;)
The main reason is that most early electronic computers used relays. Relays only have 2 positions, off or on.
Trinary computers have been built (either with digits -1, 0, +1 or 0, 1/2, 1), but never worked well enough to outperform the vast experience we have with binary.
it always boils down to "on" or "off" of an atom. decimal or any other number systems are just interpretations. but if you are able to control the eigenstates of an electron, each atom can represent one trillion-cimal, meet the future computer.
Because at the most basic level, computers are just really really REALLY complicated implementation of simple circuits switching on and off. 1 = On 0 = Off. Saved you some time.
Why has this channel not done a video or even talked about memristors yet? And the computer technological significance of this equipment. I am sure many of your viewers have not even heard about memristors yet. ;-)
What about ternary (base 3)? Digital CMOS circuits have an additional state other than high (+3.3V) and low (0V), a high impedance state in which the output is connected neither to ground (0V) nor to +3.3V. That state is usually not used at all in digital circuits.
+elfboi523 The 3rd state is widely used when there are many devices connected to some wires and you need to release these wires for another device to put information on the wires. When the output is in high impedance state and nothing is connected to it any noise can trigger the input to read the wire as '0' or '1' so you can't use the 3rd state like '0' or '1' to represent any actual information.
+elfboi523 You are talking about tri-state logic. Back in the day you had to use this because there were no MOSFETs and you had to work with specified currents in bipolar transistors. The problem is that you use pull-up and pull-down resistors, which meant you automatically needed more electric power (and lost speed).
In CMOS (PMOS and NMOS combined) the resistors were taken out, and the logic gates consist only of transistors.
But you still use it in some bus systems.
Computerphile. Would you recommend that colossus book? It seems interesting.
+Torrey Braman Professor Brailsford would heartily recommend that book, in fact see the 'extra bits' video for his personal recommendation! >Sean
Awesome! I think ill look into it!
Mobile device with thermionic valves = Pip Boy
Tommy Flowers: one of the pioneers of computing. Kudos.
Converting between different base number systems by hand is fun. A six sided die can be thought of as being base-6, but most dice are labeled wrong for this; the side with six dots should actually have no dots, to represent zero, because base-six can have only the values: 0, 1, 2, 3, 4, and 5, which is a total of 6 different values. The number "6" in base-10, would be represented by the number "10" in base-6. I want a die which is correctly labeled with 0 through 5. Using a coin toss takes much longer to get large numbers.
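If you want to play with that conversion, here's a minimal Python sketch (the function name and example values are just illustrative, nothing from the video):

```python
def to_base(n, base=6):
    """Convert a non-negative integer to a list of digits in the given base,
    most significant digit first (e.g. 6 -> [1, 0] in base 6)."""
    if n == 0:
        return [0]
    digits = []
    while n > 0:
        digits.append(n % base)   # least significant digit first
        n //= base
    return digits[::-1]           # reverse: most significant digit first

print(to_base(6))    # [1, 0]  -- "10" in base 6, as described above
print(to_base(35))   # [5, 5]  -- the largest two-digit base-6 number
```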
30 years ago I was talking about what I termed "linear bits", which is what's covered here. All you need is a small pulse of, say, 1 V between bits that represents the maximum peak, and the circuits can calibrate, like analogue video does. Ironically they might need to look at this for quantum computing and transmitting qubits, although 300 qubits have 2^300 states, which is more than the number of atoms in the universe.
Nice vid, but doesn't actually explain the actual question asked.
The short version of the reason is that actually building the electronics to perform mathematical operations becomes AMAZINGLY more complex when you have to use more than 2 possible states. Everything else, like keeping voltages apart, can be solved by improving the technology involved, but the complexity of the circuits required can't.
Why not a ternary computer {-1, 0, 1} instead of binary {0, 1}? Voltage polarity can be reversed to obtain -1, no complex subtraction system is needed, two's complement can be put aside, and the hardware could be smaller and faster.
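As a hedged illustration of the "negation is trivial" point (just a sketch, not how any real ternary machine works), here is balanced-ternary conversion in Python:

```python
def to_balanced_ternary(n):
    """Return the balanced-ternary digits of n (each -1, 0 or +1), most significant first."""
    if n == 0:
        return [0]
    digits = []
    while n != 0:
        r = n % 3            # Python's % gives 0, 1 or 2 here
        if r == 2:
            r = -1           # remainder 2 becomes digit -1 with a carry of 1
            n += 1
        n //= 3
        digits.append(r)
    return digits[::-1]

def negate(digits):
    """Negation is just flipping every digit's sign -- no two's-complement trick needed."""
    return [-d for d in digits]

print(to_balanced_ternary(5))          # [1, -1, -1]  (9 - 3 - 1 = 5)
print(negate(to_balanced_ternary(5)))  # [-1, 1, 1]   (-9 + 3 + 1 = -5)
```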
The real reason? Computers took a lot of space in the beginning; some were entire buildings. You only need 8 transistors to count to 255 in binary, but you need 10 to count to 55 in decimal.
Binary maximum count: 1+2+4+8+16+32+64+128 = 255.
Decimal maximum count: 1+2+3+4+5+6+7+8+9+10 = 55.
Well, you probably would not get those 3rd-degree burns anyway, because your phone, let alone the battery to run the thing for more than 2 seconds, would not fit in your pocket if the phone were made with thermionic valves. But the comparison made me smile anyway; thanks for another great video.
This professor is fascinating
I understand the issue with decimal, but why not hexadecimal, or heck, even base 4?
They're powers of 2.
This didn't really explain why binary is used now. So there was an alternative base 10 system on the valve computers at Bletchley Park. Why aren't we still using that system then? What happened to it?
Transistor logic happened. The big difference between transistor logic and valve logic is that the spike in power consumption occurs only when switching, not all the time. Valves are power hogs all the time; transistors only when switching. So with valves, simply having a register to hold a value would draw power, so it made sense to have the smallest amount of circuitry. In transistor logic, you can have a lot of static circuitry which will not consume much power when nothing happens on it. So representing numbers in binary allows for simpler circuits, even if there are more of them in the computer. It's the same thing with relays: most relay computers also used binary representation (the Zuse Z3, even in 1941) because with relays it's also the transition that is costly.
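For anyone wanting a number behind "transistors only draw power when switching": the usual first-order model for CMOS dynamic power is P ≈ α·C·V²·f (activity factor × switched capacitance × supply voltage squared × frequency). A rough Python illustration with made-up example values, not measurements of any real chip:

```python
def dynamic_power(alpha, c_farads, v_volts, f_hertz):
    """First-order CMOS dynamic power: activity * capacitance * V^2 * frequency."""
    return alpha * c_farads * v_volts**2 * f_hertz

# Hypothetical node: 10 fF of switched capacitance, 1 GHz clock.
print(dynamic_power(alpha=0.1, c_farads=10e-15, v_volts=1.0, f_hertz=1e9))  # ~1e-06 W
print(dynamic_power(alpha=0.0, c_farads=10e-15, v_volts=1.0, f_hertz=1e9))  # 0.0 W when nothing toggles
```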
We figured out that physics doesn't have ten fingers.
A circuit that can natively store and retrieve one of 10 states would be complicated to build and would include a lot of analog components that aren't efficient to miniaturize on ICs. Two state logic requires so few parts per bit that it makes sense to keep using it, at least with current technology.
Why not balanced ternary? It's the most efficient integer base, i.e. the closest one to base e.
one of the best story tellers ever
This is basically a performance to cost issue. Because the cost of a simple 2 state circuit is so low (2 transistors), binary gives the most performance for the least cost. In other words, a 4-bit binary circuit is simpler than a single 10-state circuit.
11 x 17" greenbar paper! Nostalgia washes over me.
Mother nature uses base4 for the DNA code. Why not use that?
I wish this man was my Grandfather.
I would talk to him all the time.
I think somebody should print a t-shirt with that logarithm and send it to Professor Brailsford!
Digital communications use multiple voltage levels to pack more bits per clock cycle, though. PAM (Pulse Amplitude Modulation) is used in Ethernet and QAM (Quadrature Amplitude Modulation) in Wi-Fi.
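Here's a toy Python sketch of the PAM-4 idea (two bits per symbol over four amplitude levels); the level values are illustrative, not taken from any particular standard:

```python
# Toy PAM-4 mapping: four amplitude levels, two bits per symbol.
LEVELS = {0b00: -3, 0b01: -1, 0b11: +1, 0b10: +3}   # Gray-coded: adjacent levels differ in one bit
INVERSE = {v: k for k, v in LEVELS.items()}

def modulate(bits):
    """Turn a list of bits (even length) into one amplitude per bit pair."""
    pairs = [(bits[i] << 1) | bits[i + 1] for i in range(0, len(bits), 2)]
    return [LEVELS[p] for p in pairs]

def demodulate(amplitudes):
    """Recover the bit stream by mapping each amplitude back to its bit pair."""
    bits = []
    for a in amplitudes:
        pair = INVERSE[a]
        bits += [pair >> 1, pair & 1]
    return bits

symbols = modulate([1, 0, 0, 1, 1, 1])
print(symbols)              # [3, -1, 1] -- three symbols carry six bits
print(demodulate(symbols))  # [1, 0, 0, 1, 1, 1]
```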
Technically, the most effective base for arithmetic is e, but that's silly. 3 would still be better than two though
I have heard that before! Why is that, and can you give a source so I can read more about it, please?
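The usual argument is "radix economy": the hardware cost of representing numbers up to N in base b is taken to be roughly b × log_b(N) (states per digit times digits needed), and that expression is minimised at b = e ≈ 2.718, with base 3 beating base 2 by a few percent. A quick Python sketch (the choice of N is arbitrary):

```python
import math

def radix_economy(b, N=1_000_000):
    """Rough 'cost' of representing numbers up to N in base b:
    (states per digit) * (digits needed) = b * log_b(N)."""
    return b * math.log(N) / math.log(b)

for b in (2, 3, 4, 10, math.e):
    print(round(b, 3), round(radix_economy(b), 1))
# Base 3 comes out slightly cheaper than base 2; e is the (impractical) optimum.
```

The term to search for is "radix economy".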
But I do want my guitar amp made with valves :) Interestingly, it's rare to see the bias applied at the grid in guitar amps; it's mostly in the cathode, and the signal is on the grid. I guess you get better amplification when the signal is the control voltage on the grid; you can make the cathode have a rather negative voltage.
Our Computer Architecture teacher says that ternary performs better and is smaller compared to binary, but people started with binary and stayed with it, just like the QWERTY keyboard over the Dvorak, even though the Dvorak keyboard is better.
+Captain Nevaran
And just like humans began learning in decimal, we probably won't change even if other bases are easier to calculate with.
+Captain Nevaran Dvorak has not been conclusively proven to be better.
NoriMori
An article said that a person with the same experience in Dvorak manages to type 20%+ faster than on a QWERTY keyboard, because of the key layouts (of course that's language-specific).
Captain Nevaran
I'd personally want the source of that article and how the experiment was conducted, but yes, I definitely wouldn't doubt that being the case. However, we're never going away from QWERTY now.
@@NevaranUniverse What kind of trinary computer did your teacher suggest? Tri-state CMOS logic where the third state is a high impedance state where the output is disconnected from both ground and the power supply?
It would be interesting to see a modern base ten computer.
I just realized that I don't understand the physics of the "vacuum tube" transistor. I'll probably google it in a few days, but if you made a video about it, that would be awesome!
This channel is freaking great!
He didn't talk about transistors. He gave us historical reasons, but didn't say why binary is STILL the industry standard. Are the reasons presented still applicable today, even with transistors and newer technologies?
Either something is, or it isn't... What's not to like?
I have the greatest respect for Professor Brailsford, and I realize that the subject of electrical noise may be too complex to introduce on Computerphile.
However, it is difficult to understand how base 2 can be justified for use in computers without a discussion of electrical noise and the logic levels that represent the 0/1 of digital computers.
observerms
When you look at the first applications, the code breakers were dealing with Morse or 5-bit radio telex code signals.
A complex multi-level system to do base 10 arithmetic simply wasn't required.
The messages were sent in extended alphabets, encoded by bits.
I found it bizarre to suggest optimising computers for decimal, inflicting unnatural complexity on every operation.
I don't understand why, today when we have power consumption problems, we still aren't moving towards either bi-quinary or decimal. Does anyone understand that?
Ternary would be an even better option: one ternary digit carries log2(3) ≈ 1.58 bits, about 58% more information than a binary digit when storing a random data set. (Base e is perfectly efficient but not much better than 3, and much, much harder to apply.)
Why not hexadecimal? One digit of hexadecimal is equal to 4 digits of binary.
Why did he calculate log of 10 to base 2?
He just wanted to show off!
But on a serious note... to store one bit of data you need one basic memory component, namely a flip-flop, and because we need to store, say, 99, we need at least 7 such basic memory blocks. But if somehow we had designed a basic memory block capable of holding 10 states, one per decimal digit, instead of the current 2 states for binary, then we would need only 2 basic memory blocks for storing 99: one for each digit. 7/2 = 3.5, which is slightly greater than 3.32 because we always round up to a whole number of blocks. So, counted in basic memory blocks, a binary-based digital system ends up roughly 3.32 times bulkier than its decimal-based counterpart.
As mentioned by him later in the same video, it is to calculate the number of bits required to represent any n-digit decimal number in binary. Log of 10 to base 2 is about 3.32. The professor's point is that if you multiply this value 3.32 by the number of decimal digits n and then take the ceiling, you get the number of bits required. For example, if you have a 2-digit number, say 35 or 48 or 99 (the greatest 2-digit number), then you require 2 × 3.32 = 6.64, and its ceiling is 7, so a 7-bit binary number can represent all 2-digit decimal numbers. Similarly, for 3-digit numbers, say 999, you need 3 × 3.32 = 9.97, with a ceiling of 10, so a 10-bit binary number can represent all 3-digit decimals. Same for 4 digits and so on. Binary reduces the complexity of the logic but, as you can see, needs more digit positions: for a 2-digit decimal number you need 7 bits instead of 2 digits.
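The same calculation in Python, using log2(10) ≈ 3.32 (the digit counts in the loop are arbitrary examples):

```python
import math

def bits_for_decimal_digits(n):
    """Smallest number of bits that can represent every n-digit decimal number."""
    return math.ceil(n * math.log2(10))   # log2(10) ~= 3.32

for n in (1, 2, 3, 9):
    print(n, bits_for_decimal_digits(n))  # 1 -> 4, 2 -> 7, 3 -> 10, 9 -> 30
```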
You could argue that binary is not in fact used for DSL transmission where there can be as many as 15 bits per symbol. They use a combination of voltage levels and phase modulation to encode more bits into each symbol.
Storage is one thing, but what about the logic itself, which is inherently binary? That would all have to be converted to base 5 or base 10 or whatever. I feel like that would be incredibly difficult, but maybe I'm missing something.
Binary is easily worked with as hexadecimal and octal. Hexadecimal needs fewer digits:
0xFF is 255, while 99 decimal is 0x63; octal is fairly close: 99 decimal is 143 octal, and 77 octal is 63 decimal.
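Python's built-in conversions show the grouping directly (just an illustration):

```python
n = 99
print(bin(n), oct(n), hex(n))        # 0b1100011 0o143 0x63
print(int("FF", 16), int("77", 8))   # 255 63
# Each hex digit covers exactly 4 bits, each octal digit exactly 3:
print(f"{n:08b}", f"{n:02x}")        # 01100011 -> 63 (0110 0011)
```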
But now the question becomes this: if you're using multiple analog voltage separations to encode the values 0-4, how do you *store* those values? Currently we can store data because we can turn things on/off, or reorient magnets north/south, etc. How would you do that with 5 possible states, or worse, 10?
*****
That works for large-scale machines, but how could you do it fully electronically so it can be miniaturized enough for laptops, tablets, phones, or even just desktop PCs?
*****
That's what I figured XD I just wondered why that wasn't addressed in the video, since it's a very important reason to use binary.
+Joe Mills Multiples of 2 are what make sense given the computer systems we have that use the storage.
If you build a 5 level system, you're of course free to use only 5 of the 8 levels in TLC flash (or build specific 5-level flash).
What I replied to was the question of how to store multiple levels in memory, nothing about existing such memory not being binary based.
Michael Tempsch
But the entire point of the video was "why don't we use something other than binary?" To say "you could do it by using parts of binary" is redundant.
+IceMetalPunk As stated, you don't have to use 'parts of binary.'
You could design a specific 5-level flash memory - the tech is there, currently up to 8 levels.
Given this question in your original post: "Currently we can store data because we can turn things on/off, or reorient magnets north/south, etc. How would you do that with 5 possible states, or worse, 10?", I pointed to a current technique that actually does this. I fail to see how the basic technique must be disqualified because it in current implementations uses a number of levels that is a power of 2.
Hi Computerphile, could you guys do some detailed explanations about instruction sets, and show the difference in detail between CISC and RISC?
But what about trinary?
Wow! 🤩 Awesome explanation! 🥳🎉👍🏽💻📱🖨⌨🖱
Let's say we build a hexadecimal (or any other base>2) computer. For every value we then have to differentiate between 16 distinct 'positions'. If we can build technology that precise then we can also (in most cases) build technology where those hexadecimal numbers are replaced with 4 binary bits 1/4th the 'size' each (or whatever measurement is relevant). Since bits are simpler and easier to read they are readable even at smaller sizes. Since a lot of operations (like logic gates) are naturally done with binary it is easier to build a binary computer. And that is why binary is the favoured number system for electric computers.
Did someone find a better alternative so far? Or even try to? Just out of curiosity.
There were some attempts at ternary computers in the Soviet Union, but circumstances of the Cold War led them to be scrapped in favour of stealing binary systems from the West to save resources and research time.
Why not trinary then? Closer to "e", more effective energy wise.
Why not Use Ternary?
I wonder... with our current manufacturing and fabricating abilities, is making a decimal computer system still THAT inefficient compared to binary anymore? I mean yeah it might be a little bit, but considering how small we can make things, how efficient on power they are, it has to be somewhat plausible. I'd love to see that as an exploration of our computing abilities to see if perhaps there is a better way to, well, computer, from the ground up.
I love these videos. Fascinating stuff! Thank you!
riveting stuff. more please computerphile!
Babbage should have invented the CNC mill. The textile industry had already begun to embrace that kind of automation, so this would not have been a foreign concept to him.
This needed an example of 0-4 as far as voltage goes. Would they still be using +5/-5 and just detecting outputs at +5, +3, 0, -3, -5? MUCH more info needed on how they decided a level was "stable".
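As an illustration only (not how any actual bi-quinary or multi-level machine was specified): with a single 0-5 V rail you could space five levels evenly and decode by picking the nearest one, which also makes it obvious how small the noise margin gets:

```python
# Illustrative only: five evenly spaced levels on a 0-5 V rail, one per digit 0..4.
RAIL = 5.0
LEVELS = [i * RAIL / 4 for i in range(5)]        # [0.0, 1.25, 2.5, 3.75, 5.0]

def decode(voltage):
    """Read back a digit by picking the nearest nominal level."""
    return min(range(5), key=lambda d: abs(voltage - LEVELS[d]))

print(LEVELS)
print(decode(1.1), decode(3.9))   # 1 3 -- but only ~0.6 V of noise margin per level
```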
I can see computers splitting logical commands from calculator commands, but... I'm really concerned with the latency of transferring data between base 5 and base 2.
Also concerned that data storage wasn't mentioned. Please do a follow up.
In Soviet Russia they made a ternary computer (the Setun); it was more efficient than binary. But unfortunately, because of money issues, they didn't develop it further and started copying the West. The interesting part was that instead of 1 and 0 they used 1, 0 and -1, which made negative numbers easier to express than in binary; some calculations were also easier.
There is *so* much Soviet tech with incredible potential that we just straight up lost because of the pressures and constraints of the Cold War.
I'm not mad the Soviets lost. I'm mad at the science that never happened because they were forced to fight a war instead of spending time and resources on scientific pursuits for the sake of scientific advancement.
why didn't base 5 catch on?
But how would decimal logic gates even function?
Well, we've had analog computers before where precise voltages represent numbers and various circuits combine those voltages in different ways. At some point you have to take a measurement and record a number which will be accurate to whatever precision the machine allows. Logic would work along those lines, but the advantage of binary is the complete lack of ambiguity and the ability to determine the state of any bit with a single voltage threshold.
just move to dozenal logic
Wow.. What an amazing explanation.
This whole interview could have been boiled down to one word: transistors.
Correct me if I'm wrong, but MLC, TLC and QLC technology is doing kind of the same thing...
3:24 I'd also buy one... +Computerphile - Any chance of you branching into merch??
Because it is easier than quaternary when constructing logic gates and latches. But solid state storage uses multiple voltage levels.
I didn't get the end of the video. So why isn't bi-quinary used in modern computers?
Why not mention anything else, like balanced ternary? The question "why binary" is not answered in this video. It seems more like a trailer for a video that would actually answer the question, with a couple of interesting tangential facts tossed in.
How frickin' steampunk would a phone with thermionic valves look?!
Have you guys done a video on analog computers already?
Binary is easier to build than other number systems... is it?
2:20 I think he meant anode.
Well, why not use hexadecimal? It's shorter than decimal.
Probably should mention Binary Coded Decimal. That's where you use 4 bits to encode each decimal digit, and just don't use 0b1010, 0b1011, 0b1100, 0b1101, 0b1110, 0b1111.
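A minimal sketch of that packing in Python (the helper names are mine; this isn't any particular hardware's BCD format):

```python
def to_bcd(n):
    """Pack a non-negative integer into BCD: one 4-bit nibble per decimal digit."""
    result = 0
    for shift, digit in enumerate(reversed(str(n))):
        result |= int(digit) << (4 * shift)
    return result

def from_bcd(b):
    """Unpack a BCD value back into an ordinary integer."""
    n, place = 0, 1
    while b:
        n += (b & 0xF) * place   # low nibble is the current decimal digit
        b >>= 4
        place *= 10
    return n

print(hex(to_bcd(255)))   # 0x255 -- each nibble is one decimal digit; 0xA-0xF never appear
print(from_bcd(0x255))    # 255
```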