I see. It’s not so unusual. Roman numerals used something similar to represent their 4s and 9s. This is the same concept but applied to the smallest base that allows it.
If truncating balanced ternary decimals is the same as rounding, does that mean ternary doesn't offer any benefit when it comes to expressing floating-point numbers? The biggest problem for someone like me with binary is that floating-point errors crop up pretty quickly in single precision. Double precision is better, but the inherent binary limitation on expressing values still exists; it just gets pushed further out. I'm terrible at maths so would appreciate further insight.
I very much enjoy your math content. But your running gag of dropping and breaking stuff keeps bugging me. When you put up the computer in this one my immediate reaction was a toe-curling "oh no". And rightly so. I am not complaining. Just sayin... Thank you!
@@ComboClass I thought so. And even if, you are perfectly entitled to breaking your own stuff. I have a friend who keeps her tablet in a case that makes it unbreakable. And she regularly throws it across rooms to make a point. So far the claim has held true. But I just can't cope :-).
I don’t know anything about that brand. You’re the second comment which recognized it which surprised me. I got the cooler from my family who had it from a while ago
I'm not seeing how truncating is the same as rounding... If the digit I'm cutting off is 1, don't I need to make the lowest place I'm keeping "more positive"? T->0 0->1 1->0+carry ? And if it's T, the next place needs to be made more negative.
0.TT0011 -> 0.TT001 (truncated), versus binary 0.110111 -> 0.11011 (truncated) or 0.11100 (rounded).
In binary, 1.1 (1½) is exactly in between 1.0 (1) and 10.0 (2), but in balanced ternary 1.1 (1⅓) is closer to 1.0 (1) than to 1T.0 (2), and 1T.T (2−⅓=1⅔) is closer to 1T.0 (2) than to 1.0 (1), so rounding is the same as truncating.
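A quick Python sketch of that last point (not from the video; the function name is made up for illustration). It keeps every remainder in [-1/2, 1/2), so chopping off trailing balanced-ternary digits always lands on the nearest multiple of 3^-places:

```python
import math
from fractions import Fraction

def bt_truncate(x, places):
    """Keep the integer part plus the first `places` balanced-ternary fractional
    digits of x. Remainders stay in [-1/2, 1/2), so every digit is -1, 0 or 1."""
    r = Fraction(x)
    ipart = math.floor(r + Fraction(1, 2))        # balanced choice of integer part
    r -= ipart
    digits = []
    for _ in range(places):
        r *= 3
        d = math.floor(r + Fraction(1, 2))        # nearest integer, always -1, 0 or 1 here
        digits.append(d)
        r -= d
    value = ipart + sum(d * Fraction(1, 3 ** (i + 1)) for i, d in enumerate(digits))
    return digits, value

# Truncating IS rounding: the kept digits are always within half a unit
# in the last place (3^-places / 2) of the original number.
for x in [Fraction(2, 5), Fraction(-7, 11), Fraction(13, 17)]:
    digits, value = bt_truncate(x, 6)
    assert abs(x - value) <= Fraction(1, 2 * 3 ** 6)
    print(x, digits, value)
```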
When I was at a Computer Museum, I was "designing" a balanced ternary computer. I got around to deciding that I'd have a 27 trit instruction size. With 9 trit trytes. For boolean functions I thought of shifts of trits, or 3-trits, or trytes. Hmm.. maybe we should call them tits so 3-tits to a trit? And masking: + passes whatever, 0 results in 0, and - would negate whatever. Probably have 9 working registers too. That's as far as I got -- thank you Covid.
@@b43xoit That's a good question. I'd think a ternary person would want to still think in groups of 3. So, 27 states would be quite a trick for a single symbol. And the reason to group the trits is to use fewer symbols. I prefer the + 0 - set. I suppose D C B A 0 a b c d could work for two trits, or W X Y Z at the end to be more obvious.
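A tiny Python sketch of the masking rule described a couple of comments up ("+ passes whatever, 0 results in 0, and - would negate whatever"), assuming trits are stored as the ints -1, 0, 1; nothing here is from an actual ternary machine:

```python
def trit_mask(mask, word):
    """The masking rule from the comment: a +1 trit in the mask passes the
    corresponding trit through, 0 zeroes it, and -1 negates it.
    In balanced ternary that is simply trit-wise multiplication."""
    return [m * t for m, t in zip(mask, word)]

tryte = [1, -1, 0, 1, 1, -1, 0, 0, 1]        # a 9-trit "tryte"
mask  = [1,  1, 1, 0, 0,  0, -1, -1, -1]     # pass the first 3, zero the next 3, negate the last 3
print(trit_mask(mask, tryte))                # [1, -1, 0, 0, 0, 0, 0, 0, -1]
```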
Thanks for watching! Check the description for links to the other episodes I've made about different bases. Also to note: a lot of people seem offended about the laptop destruction. The computer I used as a prop was completely non-functional. I would have had to dispose of it in any case; I just had fun using it as a prop first.
that looked like a legit accident, very good physical comedy with the laptop 10/10 would destroy again
It reminded me of "Joseph's Machines". He's always creating contraptions and destroying things like laptops in the process.
I thought that was obvious. Just saying... even though I struggle with the maths, I'm not the dumbest person around.
0:00
Laptop damaged 0 times!
0:21
Laptop damaged 1 time!
7:09
Laptop damaged 1T times!
15:43
Laptop damaged 10 times!
16:44
Laptop damaged 11 times!
16:55
Laptop damaged 1TT times!
This list is a dynamic list. You can help by expanding it.
11:06
Did you mean we're allowed to damage our own laptops, to expand the list? Great idea! Here's another go: 💻 🔨
17:00 fixed in the drawer
hmm yeah i shouldve expected a heart...
CORRECTED VERSION
0:00
Laptop damaged 0 times!
0:21
Laptop damaged 1 time!
7:09
Laptop damaged 1T times!
11:06
Laptop damaged 10 times!
15:43
Laptop damaged 11 times!
16:44
Laptop damaged 1TT times!
16:55
Laptop damaged 1T0 times!
J:AG
Laptop damaged 1T1 times!
@@asheep7797 In Combo Class, we use Combo Class notation system: 0, ↿, ↿⇂, ↿0, ↿↿, ↿⇂⇂, ↿⇂0, ↿⇂↿.
The way you made the 1 and -1 look like up and down arrows made me think of the spin of elementary particles
Or just a vector:)
Combo Class: 50% education, 50% things falling and breaking
100% outdoors!
It's a crash course: Things are crashing all the time during the course.
And 10% squirrel.
@@BooBaddyBigAD-squirrel
Undertale humor@@__christopher__
Balanced bases do be wacky. I sure do wonder if balanced ternary could have been more popular if binary didn't emplace itself so strongly.
Electrically, binary has a higher resistance to noise than ternary. This is a very big deal when your computer memory isn't perfect or you're sending information a long way. When people were figuring out telegraphy they tried many different encoding schemes. Balanced ternary has a bunch in its favor. It just couldn't go as far as binary. That's why we ended up with the on/off of Morse code.
Theoretically balanced ternary is better, physically binary is better
@Peter Bonucci With Morse code, would not a (I'm not sure on the name) "pseudo-ternary" have reduced mistakes more? A "short/dot," a "long/dash," and a "pause/space." Would this not have reduced ambiguity in messages, or was there too much chance of a misinterpreted "pause" for "end?"
@@scaper8 There was too much of a chance of confusion. When you have levels of +/0/-, 25% of the voltage range gets assigned to +. When you use binary, (call them +/-) 50% gets assigned to +. Where distance is everything (e.g. undersea cables,) you simply cannot give up the noise margin.
I have seen ternary used in fiber optic cable, but that was a specialized application.
Well, yes, and it would be more popular if our maths educations weren't so lame, too. People would probably find the constructive reals more intuitive, and we might broadly live in a world of fewer rounding errors. But it's not all roses. Bits are used for logical values, too, and the larger the base the bigger the logic tables. Three-valued logics are sometimes useful, but seem harder to work with. Bitmaps become less dense. I don't know what the impact is going to be on encoding for storage and network transmission, but it's going to be huge.
Another idea, if you're interested in using this sort of idea for arithmetic on contemporary hardware, is to use balanced base 2^n − 1 for bigger n, e.g. balanced 255 or 65535. Then you get to use vector arithmetic at very close to full density, and broadly get better utilisation of binary hardware.
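A rough Python sketch of that balanced base 2^n − 1 idea (function names are invented for illustration). For base 255 every digit lands in -127..127, so each digit fits in a signed byte; the same routine does ordinary balanced ternary with base=3:

```python
def to_balanced(n, base=255):
    """Digits of n in balanced base `base` (odd), least significant first.
    For base 255 every digit lies in -127..127, i.e. it fits in a signed byte."""
    half = (base - 1) // 2
    digits = []
    while n != 0:
        r = n % base                 # 0 .. base-1
        if r > half:
            r -= base                # shift into the balanced range
        digits.append(r)
        n = (n - r) // base
    return digits or [0]

def from_balanced(digits, base=255):
    return sum(d * base ** i for i, d in enumerate(digits))

n = 123456789
d = to_balanced(n)
print(d, from_balanced(d) == n)      # round-trips
```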
this channel is a perfect mix of Tom & Jerry level slapstick and fantastic educational content
This is my favorite number base. Just the number of "not so boolean" operations you can do on balanced ternary digits...
that's very cool to know, while watching i was imagining myself discovering an alien society using this method because i watched some of those videos that are like "what if the aliens use a different system of [doing a specific thing, like math] to ours?" and i finally found one and it's gorgeous
nice
let us know how first contact went /j
@@nolotilanne "ah, yes: a 2 am search on the internet", thank you! after reading what it is, i understand what you mean with "failure". honestly for me it's pretty ugly when mathematicians could just say "0 doesn't appear in any number, except on the number 0", like treat it like a NaN and go do something else. but then there are even worse things like "100 is written 9A in this new base 10" which loses the reason why we use todays numeric systems ("a*10^0 + b*10^1 + c*10^2") which is uglier
@@nolotilanne ...but then if we used this bijective system only for computers, the "problem" of the digit 0 wouldn't exist, because, for example, the number 0 itself would be represented in only one way: 0000. i guess there are always those problems about compatibility with other (older/newer/different) systems that can be relegated to an algorithm. but balanced ternary would be better, i assume, since domotro made a video about it and not the others
I love balanced ternary.
You know how they say that there are +- types of people...
OH MY GOD YES! I love balanced ternary so much, I've been waiting for this video for a long time!
I think the reason we keep using binary isn't because of the physical manufacturing logistics, but the fact that all our algorithms are built with binary in mind. A binary circuit can still use ternary by using 2 bits per… Trit? Trigit? The fourth symbol could be used for special purposes in some contexts, or even be the same as 0 to allow for "lazy normalization".
But even without the history of base 2 in computing, most “divide and conquer” algorithms work naturally with base 2, since you want to divide your problem into the least number of chunks (i.e. 2) for efficiency reasons.
The main reason is that CMOS logic is easiest with high/low signals. Switching to balanced ternary requires a new signal (-V) and all the routing therein, plus logic gates are more complex.
"Trit" is the correct term.
@@defenestrated23 more complex is an understatement. just thinking about how a XOR would work on "balanced ternary" already blows my mind.
making operations becomes f*cking harder.
when you sum two bits (A, B) in binary, the highest bit is (A AND B) while the lowest bit is (A XOR B), as easy as it can be.
but in balanced ternary... what would be the logic gates to do it?
that's a nice problem to try.
if somebody reads this and desires to prove us wrong, please do it.
@@WilliamWizer-x3m you have more complexity per digit, but you need fewer digits for the same numbers. 64 binary bits is roughly equivalent to 40 trits. If you need 50% more circuitry to handle three states per digit instead of two, it still comes out ahead. 40 digits × 3 states each < 64 digits × 2 states each. It's called radix economy, and base 3 is the optimal integer base by this metric.
Binary logic has six different operations that are commutative and associative, like XOR, AND, etc. In ternary you'd have a larger selection of different operations to choose from. A similar principle to radix economy applies. Because of the larger choice of operators, you can specify any logic with fewer operations.
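To make the "what would the gates be" question concrete in arithmetic terms, here's a minimal software sketch of a single-trit adder plus a ripple-carry add, assuming trits are stored as the ints -1, 0, 1 (this is the addition table, not an actual gate design):

```python
def trit_full_add(a, b, carry_in=0):
    """Add two balanced-ternary digits (each -1, 0 or 1) plus a carry.
    Returns (sum_digit, carry_out), both again in {-1, 0, 1}."""
    s = a + b + carry_in             # ranges over -3 .. 3
    digit = (s + 1) % 3 - 1          # fold back into {-1, 0, 1}
    carry_out = (s - digit) // 3
    return digit, carry_out

def bt_add(x, y):
    """Ripple-carry addition of two digit lists, least significant trit first."""
    out, carry = [], 0
    for i in range(max(len(x), len(y))):
        a = x[i] if i < len(x) else 0
        b = y[i] if i < len(y) else 0
        d, carry = trit_full_add(a, b, carry)
        out.append(d)
    if carry:
        out.append(carry)
    return out

# 5 is 1TT (digits [-1, -1, 1]) and 7 is 1T1 (digits [1, -1, 1]); 5 + 7 = 12 = 110
print(bt_add([-1, -1, 1], [1, -1, 1]))   # [0, 1, 1]
```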
you can also do balanced bases for all natural numbers greater than 2, not only the odd ones. For an n-digit balanced number system, just take 0 and the (n−1)-th roots of unity (in the complex plane). For n=4 (or n=7) it is very beautiful, as it lets you calculate in the Eisenstein integers very easily.
I can't even pretend that I am at a high enough math and physics level to understand the uses of this, I'm sorry. Although I do like it when numbers do pretty things, so maybe I'll look into your recommendation!
@@jazermano do you know the Gaussian integers? they are just the complex numbers with integer entries for both the real and imaginary components. They form a 2-D grid made of squares. but it turns out the square is not the only polygon that tiles the plane; the triangle (and hexagon) do as well. so if you draw a triangular grid on the complex numbers, where -1, 0 and 1 are on the grid, as well as 4 other points on the unit circle which form a hexagon with -1 and 1, then all the crossings in your triangular grid are called Eisenstein integers, and they behave pretty similarly to the integers and Gaussian integers, but also differ in a number of ways. however i find them quite pretty. they even have unique prime factorization, and some natural prime numbers are no longer prime in the Eisenstein integers. really interesting ring!
@@jazermano Balanced base ten is pretty cool. You have 0 to 5 and -1 to -5 (often written with bars on top). You only need to learn your times tables up to five and how to multiply the bars.
i can't believe i watched the whole video.
super tired, and thought: ok, just a preview.
but so interesting, kept me in suspense: how would that work, then? ...
and very well explained in the end.
The biggest reason why binary is used in computers as opposed to ternary (balanced or unbalanced) or even biquinary is its electrical simplicity. You have two voltages your gates need to maintain: +5V and 0V. If there were any voltages greater than 5V they would be treated as 5V and therefore ON, and any voltages less than 0V would be treated as 0V and therefore OFF. When you add other voltages in the middle, things get tricky because you would need to ensure circuits at the middle voltages don't "drift" into one of the outer voltages. Any amount of inductive or capacitive interference, or a power flicker, could cause some node at a middle voltage to waver toward one of the outer voltages, which can corrupt the data if later circuits incorrectly read the value.
Also, Donald Knuth's last name is two syllables: kuh-nuth. You pronounce the K separately
Using -5, 0, and 5 volts, I can definitely see problems if you needed to transition a place from + to -, or vice versa. This is one of the reasons I prefer Gray codes for encoding numbers: when incrementing or decrementing, your counter is off by at most one if it is caught in a transition. I'd really like to know more about how you'd do arithmetic in a base like this. How do you perform basic operations on it? What do truth tables look like for boolean operations? How do you derive lambda calculus or something like Church numerals from first principles?
I really wish I had a computer in biquinary, just a few digits would be really powerful! Although the logic would be confusing.
You pronounce the K in Knuth, but I don't see why you would make it its own syllable. There's no vowel between K and n.
@@__christopher__ because the tongue can't transition between "k" and "n" without an open intermediate state and the "n" is voiced, so the voice has to start somewhere, and there has to be an aspiration after the "k" to hear it.
@@b43xoit Impossible? Millions of Germans do it every day.
In the case where your testing weights can only go on one side, you donʼt actually need a 1 weight. Suppose you want to measure something that weighs 35 units. Then you can use the 32 and the 2 to see it weighs *more* than 34, and the 32 and the 4 to see it weighs *less* than 36, and youʼre assuming it has an integral weight, so it has to be 35.
This doesnʼt work in the balanced ternary case, because 35 = 27 + 9 - 1, and without the 1 you could just tell it was less than 36 but more than 33, leaving 34 or 35.
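A small Python check of the one-pan claim above, under the assumption that the unknown weight is an integer (the helper names are invented for illustration):

```python
from itertools import combinations

# One-pan weights 2, 4, 8, 16, 32 (no 1-unit weight): every reachable sum is even,
# but an unknown *integer* weight can still be pinned down by bracketing it
# between the heaviest sum it outweighs and the lightest sum it doesn't.
weights = [2, 4, 8, 16, 32]
sums = sorted({sum(c) for r in range(len(weights) + 1) for c in combinations(weights, r)})

def identify(true_weight):
    below = max(s for s in sums if s <= true_weight)
    above = min(s for s in sums if s >= true_weight)
    return below if below == above else (below + above) // 2

assert all(identify(w) == w for w in range(1, 63))
print("weights 1..62 all identified without a 1-unit weight")
```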
Donald Knuth's latest book just came out: The Art of Computer Programming, Volume 4B. It's basically a series of computer-based math problems.
Pleased to learn he's still at it, was wondering the other day how he was doing
looking at some of your older videos, this man look like math eminem
0:18 What happens when you put a bad apple on a teacher's desk.
The ternary system may be balanced but the set up sure isn’t! Great video as always, well explained math and a bit of humor is such a great Combo
I've done some research on ternary computers myself, and the main reason binary computers dominate today seems to be that back when computers were a new area to explore and ways of making reliable binary computers had just been discovered, there were no cheap and reliable parts for ternary computers yet. After binary computers started to be mass-produced and became the norm, more and more people just started to think/accept that binary was the way of doing things, resulting in fewer people researching other methods of making computers, which in turn just reinforced the dominance of binary computers.
This is the first time I watch a video of yours. Loved the rudimentary set up and the crystal clear explanation!
Modal Ternary is the only way to go.
Having a system intrinsically treat the 0 1 or -1, 0 1 or 2, and 0 1 or !2.
!2 is the superposition state you see in quantum computers, where it will end up being 0 or 1 when read later for emulating a quantum system.
In addition to a potentially 50% higher density of data storage, there are interesting cases where you can compress data natively in a system like that; for example, if you are storing a binary program you could include metadata with the 3rd component, or super-metadata with the mode on a trit-by-trit basis. It can also increase calculation speed for thirds, and when binary is better at a calculation it could just do it the old way.
There is not "the" superposition stae in quantum computers. There are infinitely many of them.
@@__christopher__ Yeah, it's not a quantum computer, it's an emulator. Like it'll get the wrong answer but the software won't crash kind of situation.
@@__christopher__ Think of a seedless random number generator. You write the data as !2 but when you go to read it, it'll be either 0 or 1.
9:27 Very cool editing
this is my favourite base, thank you for covering the topic.
I'm glad the big clock wasn't destroyed, because it is a thing of beauty and craftsmanship, and its sound will be glorious, and... I would really like to have one of those.
Also starting to get the hang of three, thanks to thee.
The most efficient base for coding is actually about 2.71828... Euler's number. Though it has some practical difficulties with implementation.
The Soviet Union built some ternary computers in the 1960s. I think the US built a few too. It's much easier to build and use a binary computer because the threshold for a bit error is higher. When you only have two possible voltages on a wire for zero and one, and there's some noise, or too much resistance in a switch, etc., the noise must be more than 50% of the voltage threshold to cause an error. In a ternary computer, the noise only needs to be more than 33% off from one of the three possible voltage values.
Also, it's easier to just switch something 100% on, or 100% off with a microscopic transistor that's just a few atoms of conductors and insulators stacked on top of each other.
Compact Discs are a 2.8 MHz analog FM signal, from which an error-free 44.1 kHz digital signal is produced. The entire point of using digital, binary encoding is signal-to-noise ratio. Shannon invented digital encoding for AT&T (Bell Labs) to eliminate the noise on long-distance telephone calls. Continuous analog signals have lots of problems with noise, and the more symbols you add per signal event, the more your signal resembles analog again.
(Digital has ridiculously large bandwidth demands, but it is worth it to eliminate noise entirely.)
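The "most efficient base is about e" claim is usually stated via radix economy; here's a tiny Python sketch of the cost factor it refers to (just an illustration of the formula, nothing from the video):

```python
import math

# Radix economy: representing numbers up to N takes about log_b(N) digits of b
# states each, so the "hardware cost" scales like b * log_b(N) = (b / ln b) * ln N.
# The factor b / ln b is minimised at b = e ≈ 2.718; among integers, 3 beats 2.
for b in (math.e, 2, 3, 4, 10):
    print(f"base {b:>6.3f}: cost factor b/ln(b) = {b / math.log(b):.4f}")
```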
Oh yes, now I recall they had that special cubic non-destructive ferrite memory, core memory :) it wouldn't lose its contents at read time. It had a special name, those cubic ferrites with two wires :)
Some sources suggest that the average power dissipation for computing with balanced ternary is less than that for binary.
@@b43xoit Wow - I hadn't even considered that :) - thanks, must be true or at least indefinite :) :). BTW I found out that the Soviet three-valued logic machine was called "SETUN" - it was not related, though, to the non-destructive-read BIAX ferrite memory (the BESM/MESM machines, "STRELA"), but they were competitors in the era. There was also an emulated logic computer, TERNAC, on a Burroughs B1700.
But the maximum noise can be 25%, because the middle voltage can be too high or too low. Let's say the levels are 0, 50 and 100. Noise can only be half the difference, equaling 25, so the max noise for three levels is 25%.
@@Jus51 oh yeah!
I only knew Balanced Ternary from the sequence on the OEIS, I didn't realize it had an actual application lol
Very nice explanation of how binary works! I will be stealing this way of explaining it when I need to explain it to people in the future.
Some fine, fine filmography demonstrated here, on top of the interesting info. I want a Domotro + Stephen Wolfram video collab. Can we make that happen, internet?
I remember looking this up years ago. I can only imagine how crazy ternary qam would be.
Loved the chaotic vibe! Subbed
That poor laptop
Number bases are a nice topic to work with, I once proved that balanced ternary works by simple induction, I still have the proof somewhere on my Overleaf
Fascinating! You forced me to watch to the end.... learning stuff and laughing all the way!
Your accident-filled videos remind me of those from ElectroBOOM, but with fewer electrocutions. I'm so glad I discovered your channel.
In this episode, Domotro approaches Neil Breen levels of laptop abuse. Also something about numbers idk I'm not a biologist.
"But what about the True/False-iness of Binary? What's the third option gonna be, 'I don't know?' LOL"
Yes.
Yeah there are forms of “ternary logic”, one of which has “true”, “false” and a 3rd state representing something like “unknown”
Not an answer to your question but a related fact: in computer hardware design, specifically in Verilog, a "logic" or "wire" data type (representing a single hardware bit) has *four* possible values, not two:
1, or "on"/"high"
0, or "off"/"low"
Z, or "high-impedance", meaning that nothing is driving it to set it to 1 or 0
X, or "unknown", which can mean various things - but in general it means either it wasn't initialized or it is driven simultaneously to both 0 and 1 by two different sources.
Or as mentioned by a previous commenter, it's like quantum computing, where the three states of a bit are State A, State B, and State C: a coherent superposition of both states simultaneously.
Which is a lot like, if not exactly what 0 actually is eh?
domotro/combo class best small math channel
Love the chaotic energy. Great explanation.
Great video ! Btw folks you definitely should try watching it at 1.5x speed, just perfect
Hmm... a tricky representational problem, how to represent the 'digits' of a balanced base in a way that is both clearly representational, but also a readily accessible key on a modern keyboard.
zero can be any of 0, O, or o, depending on how things look with our other symbols.
For ternary, we have single-key access to + and -, but those are mathematical operations, so we might look for other options as well. > and < could work but again they are mathematical symbols already. ^ and v resemble arrows, which gives some interesting possibilities, but the caret is often used for exponentiation already.
As we go up to balanced base-5, if we go with the 'arrows' motif, we could use M and W (double up arrow, and double down arrow). We could even go all-in on a letter representation with M, A, O, V, and W as the 'digits'. This suggests using A, O, V as the 'digits' for balanced ternary would work well.
When we get to balanced base-7 and higher, we may have to abandon representational digits at that point in favor of the abstract.
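A throwaway Python sketch of the letters-as-arrows proposal above, assuming A = +1, O = 0, V = -1 (matching the M A O V W ordering in the comment):

```python
SYMBOLS = {1: 'A', 0: 'O', -1: 'V'}   # A points "up" (+1), V points "down" (-1)

def render(n):
    """Write n in balanced ternary with the proposed letters, most significant first."""
    if n == 0:
        return SYMBOLS[0]
    digits = []
    while n != 0:
        r = n % 3
        if r == 2:
            r = -1
        digits.append(SYMBOLS[r])
        n = (n - r) // 3
    return ''.join(reversed(digits))

print([render(n) for n in range(8)])   # ['O', 'A', 'AV', 'AO', 'AA', 'AVV', 'AVO', 'AVA']
```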
The individual logic elements of even the type of computer described here are still binary, either "on" or "off'. However, I've sometimes thought about digital computers whose "trits" would be represented by voltage -1, 0, or +1. But the problem with that is coming up with *digital* logic elements that can distinguish between a difference of 1 volt and 2 volts, which would be the case with a +1 and -1 input. Nothing I was able to think of from TTL to RDL would seem to work unless you cheat by breaking it down into binary-crunching sub-circuits. I might more easily be able to think of hardware as Robert Heinlein suggested in _The Number of the Beast_ that represented ternary as 3-phase AC.
Check the design of the Setun computer.
you don't need 1 volt to represent the unit, you just need relatively negative and positive values, those might be 3.3v and -3.3v or 1.5v and -1.5v (1.2, 1.3, 5 whatever voltage you want that's in production for memory and logic circuits already). the chips already work in binary to sense from 3.3 to zero. but they're also relative, a 6.6v on the positive side and 3.3v on the ground is still 3.3v of difference. No digital circuit cares about the absolute voltage, just about the difference, it might as well be 4.2v on the positive side and 0.9v on the ground side.
Also, you're wrong: digital logic elements inside RAM modules today run at about 1 volt.
@@PaulSpades That's what my example said.
When I heard minimal units used I immediately thought of the primes.
That Rube Goldberg action at the beginning was brilliant!
Hope to see that stunt-laptop back in another episode! maybe it even gets some burn marks, one day? 😄
yayy balanced ternary is my favorite numbering system and the weight analogy really helped me understand it and number systems in general :::)))
I speculate that balanced base nine would fit human thinking better. And it could be represented on a computer with two trits.
The use of "T" to mean -1 is really just a variation on 1 with a bar over it, like in Boolean algebra where a bar over a variable or expression means "not".
Nice hint, but using upsidedown 1 was very satisfying for negation. I'm glad he used that.
It would be great to see examples of computers that use this system
Wikipedia has some examples.
well balanced show...
My heart breaks for that poor laptop
I recently discovered that electronically, circuits use +1 and -1, not 0.
A buffer holds a +1 or -1, but a tri-state buffer can also hold a zero.
in upcoming graphics cards they apparently use something called "PAM-3" ua-cam.com/video/Kn4wJYwQTto/v-deo.html that also has -1, 0 and 1, but not sure if that's the same thing, they use pulse modulation or like different voltages, not the direction of the current I think
I’m 34 seconds in, and you’ve already destroyed an apple. Take that Tim Cook, ya bastard!
You have my favorite cadence on all of UA-cam.
You also don't have one more negative number than positive for a given number of bits/trits.
Interesting. Now I know about 3 base 3 types.. this, binary for classic computers and quantum superposition.
except for... binary is not base 3, and quantum superposition is also not base 3?
@@deinauge7894 good call on binary.. I guess I was focusing on computer logic and not thinking correctly. As for quantum superposition, I was under the assumption there was an on position, an off position and a both on and off position. I guess I'll have to look into it more to see exactly how many positions it has.
@@MatthewConlisk the number of superposition states is infinite. you can imagine it as an arrow that can point in any space direction (and an additional phase that is similar to the starting angle when you imagine the arrow rotating around its axis). I know this picture has its flaws - but it is as close as it gets.
So a superposition is not just "both on and off" but any direction that is not exactly upward or downward. And the picture i gave also clarifies that it depends on your point of view what "on" and "off" mean, and which states are pure (not superposition) states...
@@deinauge7894 I hadn't thought about it that deeply. Thank you for explaining it in a way that I can visualize.
Wonderful video!
The dominance of binary isn't due to manufacturing quantity, but rather the underlying physical technology being used to represent values. If you were to make a chip that uses balanced trinary, it would store two bits and not use one of the possible combinations. Flip flops have two states. DRAM was designed and matured to match that. Logic gates are built out of semiconductor transistors that have two states. Could you start with a semiconductor switch that could be off, positive voltage, or negative voltage? I don't know. But it would be more complex to have to carry another power bus everywhere.
been a while since i watched one of your vids. but dang are they fun and educational.
Yoo I was asking for this topic a while ago, thanks for doing it!
I wish you had drawn a tree diagram of how to represent numbers in balanced ternary, because it's a lot more intuitive than it looks: the point where you jump up one digit is just in the middle of the base. I also missed the fun fact that the automatic rounding results in negative powers of two having no finite representation, ending in either 11111... or TTTTT..., which gives us an unusual insight into why 0.999... is equivalent to 1.000... in our decimal system.
@@CjqNslXUcM I almost included that fun fact about the number 1/2 doing that trait you mentioned, but I saved it because I'm going to make a whole episode sometime in the future about that type of topic (certain numbers having multiple representations, and which ones might have that in different bases)
@@ComboClass i claimed it was negative powers of two that cause this double representation, however that is wrong. I think the number needs to have a prime factorization of the form 2^(-1) × 3^(x) × other_factors(y), where x is any integer and y is any natural number.
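A quick Python illustration of the double-representation fact for 1/2 (partial sums only, so it's suggestive rather than a proof):

```python
from fractions import Fraction

# 1/2 has no finite balanced-ternary expansion, but it has two infinite ones:
#   0.111111...  = 1/3 + 1/9 + 1/27 + ...  -> 1/2   (from below)
#   1.TTTTTT...  = 1 - 1/3 - 1/9 - ...     -> 1/2   (from above)
# which mirrors 0.999... = 1.000... in decimal.
up   = sum(Fraction(1, 3 ** k) for k in range(1, 21))
down = 1 - sum(Fraction(1, 3 ** k) for k in range(1, 21))
print(float(up), float(down))   # both partial sums are already ~0.5
```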
Two trits can represent a digit in balanced base nine. I advocate this for human use. It is close to 10, so not very foreign to our ways of thinking, in terms of place value. It would eliminate pricing that ends with a row of nines, like nine dollars and ninety-nine cents, plus would bring all the other advantages of balanced numerals, including that truncating is the same as rounding off.
prices ending in 99 cents is actually a marketing trick, not an accident. people are more likely to buy something that's $9.99 than $10.00
@@wyboo2019 Of course it is a marketing trick. I'm saying that changing the number system in this way would substantially defeat that trick.
But then people would use 9.88@@b43xoit
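For what it's worth, here is a small Python sketch of the balanced base nine idea above (digits -4..4, most significant first; the function name is invented for illustration):

```python
def to_balanced9(n):
    """Digits of n in balanced base nine (each in -4..4), most significant first."""
    digits = []
    while n != 0:
        r = n % 9
        if r > 4:
            r -= 9
        digits.append(r)
        n = (n - r) // 9
    return list(reversed(digits)) or [0]

# $2.99 is 299 cents. Because truncating balanced digits is the same as rounding,
# a price written this way can't be made to look much cheaper by tacking on nines.
print(to_balanced9(299))   # [4, -3, 2]  ->  4*81 - 3*9 + 2
```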
Would using Troolean logic enable efficient problem-solving? If 1 were True, -1 were False, and 0 were Maybe?
Binary logic gives 16 boolean operations. Ternary logic gives 19,683 boolean operations.
@@gljames24 But those aren't necessarily all useful. W'pedia shows tables called "^" and "v", but doesn't explain how they correspond to the same-named binary operators for bits. I suppose they are gates from which an ALU can be built.
Fun Fact: Ternary logic has 19,683 boolean operations compared with binary's 16.
Oh wow yeah x^x^2
I think you mean binary operations (here "binary" refers to having two arguments, regardless of the number of values those arguments may take). Boolean logic by definition has only two truth values, and boolean operations are operations of boolean logic.
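The counts quoted above come straight from counting truth tables; a quick check in Python:

```python
# A two-argument operation on k truth values is a table with k*k entries,
# each of which can be any of the k values, so there are k**(k**2) of them.
for k in (2, 3):
    print(k, k ** (k ** 2))   # 2 -> 16, 3 -> 19683
```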
That poor binary laptop obviously wasn't up to the balanced ternary game.
This was really interesting, far more interesting than binary and hexadecimal. During a brief time I spent learning how to program a particular kind of PLC I had to learn about BCD. Is there any interesting math about BCD? (because aside from the difference in character length and storage methods within the computer it is just base-10 expressed another way.)
binary has 16 two-input logical operators, ternary has 19,683 - each one of these might express concepts and algorithms that would otherwise be complex, hard to think and speak about using base 10 and binary. a few people's intuition points to some of them being advantageous in computing.
Hi Combo!
This is cool!
You must have pretty good insurance to cover workplace injury with the constant ticking time bomb going on
Enjoyed the vid, but had a question. This doesn't seem scalable if you follow Moore's law. How would you account for the insulation from interference between transistors, with the added need to read charges? It seems this has a small niche use case for now unless technology develops a solution.
Love the content and hope my question makes sense!
Seriously, this dude is like the Emo Phillips of Math.
I can't wait for his video about earthquakes and seismographs.
A cash system which uses negative coins. What could possibly go wrong?
You wouldn't have to include negative coins to get the benefits of pricing using balanced base nine. Under the current regime with decimal, marketers love to price products and services at a point where truncating is a lot different from rounding off. For example, if a doughnut or something is priced at $2.99, the mind of the prospective buyer says, "wow, that's only about two dollars". Balanced numerals would take that marketing ploy out of the toolbox and so buyers would have it easier in understanding prices.
@@b43xoit it's actually to force employees to open the cash register in order to track whenever a cash purchase is made, making it harder for an employee to just pocket the bill for themselves.
There was 1 computer built in Russia that used trinary, i.e. 0, 1, 2. It could accurately represent 0.1, but was less efficient.
W'pedia says the binary successor to the USSR's balanced-trit computer performed about as well, but cost more.
FYI, that computer deserved all of that abuse, if not more.. 😂
He would finish the class if a fire broke out. Wait... I think I watched that episode.
RIP macbook
o7
Just returned to its natural habitat, sleeping with the fishes.
Oversimplification: Balanced ternary equals Roman numerals meets base 3.
Just imagine a computer running on balanced ternary. I just want to play with one.
One could simulate it in software, for purposes of playing.
@b43xoit yeah, I'm going to look into it. It's just that I have so many incomplete side projects; I should not just make another one.
The classroom is slowly evolving again!!
Hello sir, your classroom is falling apart.
Kind regards, the squirrel.
The big reason we use binary in computers is that turning transistors on or off is really easy. If you want to have more states, there are a lot of ways that the real world starts getting in your way really quickly. Probably the most problematic is that since power is voltage * current, it's reasonably easy to keep power dissipation near 0 as long as you keep either voltage or current near zero. Once you start setting up states where both of them are significantly above zero, your transistor starts heating up... lots.
Balanced ternary requires positive voltage, negative voltage, and zero voltage. From an electronics perspective, that's still digital: no need to mess with analogue voltages, ADCs, and DACs. Also, AC is balanced ternary if the magnitude is taken as the positive and negative unit, which might be useful too.
I’m assuming balanced ternary hardware would use positive and negative voltages.
What computing applications are balanced ternary better at? In terms of software, etc
Might cut down on the average heat dissipation for doing typical arithmetic on numbers.
y’all gotta mic up domotro so he doesn’t have to shout at the camera anymore
Where did you get a Tektronix cooler??? Lol, that's awesome. I'm guessing it's convention swag?
I don’t know honestly, I got it from my family who had it from a while ago
Would an actual physical currency in balanced ternary be a problem because people would conveniently keep losing their negative coins?
You don't have negative coins. The coins would be powers of three. For each type of coin, +1 means one person gives the other one of those coins, -1 means the other person gives one in the opposite direction, and 0 means neither does. These can combine across the powers of three to make + or - any integer.
@@ComboClass Makes sense! I don't know why I was thinking of it this way. Could negative coins even be a thing? Thanks for the explanation and response. Your videos are very entertaining and your explanations are extremely well thought out.
@@zeitgeistcowboy Thanks! And theoretically negative coins could describe debt, but like you said, if people had their own negative coins they could just throw them away or "lose" them haha. If you count digital currency, then owing a bank money is sort of like negative coins.
@@zeitgeistcowboy Negatives are just change
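A minimal sketch of the coin scheme described above (Python, with a hypothetical helper name; +1 means the payer hands over that power-of-three coin, -1 means the other party hands one back, and 0 means that coin isn't used):

    def coin_transfers(amount):
        transfers, power = [], 1
        n = amount
        while n != 0:
            r = n % 3
            if r == 2:          # digit 2 becomes -1 with a carry
                r = -1
            transfers.append((power, r))
            n = (n - r) // 3
            power *= 3
        return transfers

    # Paying 7: give a 1-coin and a 9-coin, get a 3-coin back.
    print(coin_transfers(7))  # [(1, 1), (3, -1), (9, 1)]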
What's really interesting is that most prominent monetary systems have mixed bases. Pennies, Dimes, Dollars, Tens, and Bens work on base ten, but what the heck are Nickels, Quarters, Fins, Jacksons, and Fifties doing in the mix?! Not to mention 50-cent pieces... I mean, it all makes sense from a practical standpoint, but it's never been very mathematical.
In the Setun computer, what were the physical manifestations of the trits? ChatGPT gives an answer, but it is not reliable.
I see. It’s not so unusual. Roman numerals used something similar to represent their 4s and 9s. This is the same concept but applied to the smallest base that allows it.
8:27 huh, that "1" symbol looks a lot like the "spin-up" symbol in quantum mechanics
8:37 oh lol I see what you did there
Cute, like paired electrons.
If truncating a balanced ternary fraction gives the same result as rounding it, does that mean ternary doesn't offer any benefit when it comes to expressing floating-point numbers? The biggest problem for someone like me with binary is that floating-point errors in single precision crop up pretty quickly. Double precision is better, but the inherent binary limitation on expressing them still exists; it just gets pushed further out. I'm terrible at maths, so I would appreciate further insight.
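A quick illustration of the binary floating-point issue raised above (a sketch, not specific to any particular precision): 0.1 has no finite binary expansion, so the rounding error shows up immediately.

    print(0.1 + 0.2)         # 0.30000000000000004
    print(0.1 + 0.2 == 0.3)  # False

Ternary wouldn't rescue this particular example, since 1/10 has no finite base-3 expansion either.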
It reminds me of Roman numerals, where some digits are subtracting
I very much enjoy your math content.
But your running gag of dropping and breaking stuff keeps bugging me. When you put up the computer in this one my immediate reaction was a toe-curling "oh no". And rightly so.
I am not complaining. Just sayin...
Thank you!
No functional computers were harmed in the making of this episode
@@ComboClass I thought so. And even if it weren't, you are perfectly entitled to break your own stuff.
I have a friend who keeps her tablet in a case that makes it unbreakable. And she regularly throws it across rooms to make a point. So far the claim has held true. But I just can't cope :-).
Dude the dropping and breaking stuff makes the channel
It seems fitting that it fell into a Tektronix branded cooler. Why is there a Tektronix branded cooler?
I don't know anything about that brand. You're the second commenter who recognized it, which surprised me. I got the cooler from my family, who had it from a while ago.
1,2,2,5,10, 10, 30,50
Trigger Warning: Senseless destruction and no fucks given
I'm not seeing how truncating is the same as rounding...
If the digit I'm cutting off is 1, don't I need to make the lowest place I'm keeping "more positive"? T->0 0->1 1->0+carry ?
And if it's T, the next place needs to be made more negative.
Not at all, because the digits are balanced around zero: T is one unit below 0 and 1 is one unit above 0, so the part you cut off is never worth more than half a unit of the lowest place you keep, and no carry is needed.
0.TT0011 -> 0.TT001
vs
0.110111 -> 0.11011 (trunc.) vs 0.11100 (round.)
In binary, 1.1 (1½) is exactly in between 1.0 (1) and 10.0 (2), but in balanced ternary, 1.1 (1⅓) is closer to 1.0 (1) than to 1T.0 (2), and 1T.T (2 - ⅓ = 1⅔) is closer to 1T.0 (2) than to 1.0 (1), so rounding is the same as truncating.
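A small brute-force check of the claim above (Python sketch): for every 6-trit balanced-ternary fraction, chopping it to 3 trits gives the same value as rounding to the nearest multiple of 3^-3, because the dropped tail is always worth less than half a unit of the last kept place.

    from itertools import product

    def value(trits):
        # trits are -1 (T), 0, or 1, most-significant first, all after the point
        return sum(t * 3 ** -(i + 1) for i, t in enumerate(trits))

    KEEP = 3
    for trits in product((-1, 0, 1), repeat=6):
        exact = value(trits)
        truncated = value(trits[:KEEP])
        nearest = round(exact * 3 ** KEEP) / 3 ** KEEP
        assert abs(truncated - nearest) < 1e-9
    print("truncating to", KEEP, "trits always equals rounding to the nearest 3^-3")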
1 less than multiple of 3: under number
Multiple of 3: just number
1 more than multiple of 3: over number
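In code (a tiny sketch; the under/over naming just follows the comment above), that classification is simply the remainder mod 3:

    def last_trit(n):
        return {0: "0 (a multiple of 3)", 1: "1 (over)", 2: "T (under)"}[n % 3]

    for n in (8, 9, 10):
        print(n, "->", last_trit(n))   # 8 -> T (under), 9 -> 0, 10 -> 1 (over)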
There's a thin line between genius and insanity.
When I was at a computer museum, I was "designing" a balanced ternary computer. I got around to deciding that I'd have a 27-trit instruction size, with 9-trit trytes. For logic functions I thought of shifts of single trits, of 3-trit groups, or of whole trytes. Hmm... maybe we should call them tits, so 3 tits to a tryte? And masking: + passes whatever, 0 results in 0, and - negates whatever. Probably 9 working registers too. That's as far as I got -- thank you, Covid.
Programmers would probably use balanced base 9 to write down pairs of trit values.
@@b43xoit That's a good question. I'd think a ternary person would want to still think in groups of 3. So 27 states would be quite a trick for a single symbol. And the reason to group the trits is to use fewer symbols. I prefer the + 0 - set. I suppose D C B A 0 a b c d could work for two trits, or W X Y Z at the end to be more obvious.
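A minimal sketch of the masking rule described above (assuming trits are stored as the ints -1, 0, +1 and a "tryte" is a list of nine of them): a + in the mask passes the data trit through, 0 zeroes it, and - negates it, which works out to a tritwise product.

    def mask_tryte(data, mask):
        return [d * m for d, m in zip(data, mask)]

    data = [1, -1, 0, 1, 1, -1, 0, 0, 1]
    mask = [1,  1, 1, 0, 0,  0, -1, -1, -1]
    print(mask_tryte(data, mask))  # [1, -1, 0, 0, 0, 0, 0, 0, -1]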
0:20 that poor macbook 😭
Well hello there Domotro