Do you think I can ever get "numberation" in the dictionary (after I get "threeven" in there)? In any case, thanks for watching! Check the description for more info and timestamps of different chapters.
Numberatiles
"even" and "mod 2" have the same number of syllables
"threeven" and "mod 3" have the same number of syllables
i think we got it
im gonna have my "sixven" numbers
@@GamingKing-jo9py "Mod 3" is not in the dictionary either. I'll be okay with "mod 3" getting added if "threeven" can't, but threeven is catchier to the general public
Portmanteau of number and operation. I will look for opportunities to use it in my everyday speech. Most of my day job is programming microcontrollers, so that will be more often than most.
@@ComboClass oh, I just meant that the two words "mod three" can be said just as quickly as "threeven", so speed isn't really a problem for me. But yes, I understand the sentiment; it would be amazing prestige to have a single word like "threeven".
Man caught himself on fire for Math.
That's what happens when you try to run a backyard math lab
You know, I suspect it may not have been necessary for the mathematical aspect at all, and he only did it for comedy
Correction: man caught himself on fire; also, math.
Screw "for science." Do things for math!
If you mean the fire flashback near the beginning, that was genuinely an accident haha. If you mean the fire moment near the end, yeah that was for mathematics/philosophy/comedy (apart from when I burnt my hand on the clock in the last second, which was also an accident)
You are so underrated!
Even though your style is one of the most unique out there, YouTube still doesn't recommend your videos.
I had to manually search for your videos even though I subscribed over a year ago.
You're a math channel, and yet somehow I'm entirely not surprised that you set yourself on fire.
Fun fact: the double-and-add algorithm you described in this video is actually used all the time in cryptography! Specifically, it's used to efficiently compute elliptic curve point multiplication and its cousin, the square-and-multiply algorithm, is used in RSA to efficiently exponentiate numbers modulo another number.
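A minimal Python sketch of the square-and-multiply idea mentioned above (the function name and test values are mine, not from the video): scanning the exponent's bits, every bit costs a squaring and every 1-bit costs an extra multiplication, mirroring the ×2 / +1 numberations.

```python
def square_and_multiply(base, exponent, modulus):
    """Compute (base ** exponent) % modulus by scanning the exponent's bits.

    Each bit costs one squaring; each 1-bit costs one extra multiplication,
    the multiplicative analogue of the video's double-and-add."""
    result = 1
    for bit in bin(exponent)[2:]:                   # most-significant bit first
        result = (result * result) % modulus        # "square" (analogue of x2)
        if bit == '1':
            result = (result * base) % modulus      # "multiply" (analogue of +1)
    return result

assert square_and_multiply(7, 560, 561) == pow(7, 560, 561)
```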
for your efficiency question, I would look into "Hamming Weight". Also just a fun fact I discovered looking into this myself: if you have (x*2)+1+1, you can always rewrite it as (x+1)*2, leading to a nice recursive way to find the minimum number of moves.
Yeah that recursion is a cool phrasing. And Hamming Weight is similar to the binary questions, but doesn't fully tell us how the "efficiency" compares to other numberations/bases (we'd need to incorporate other concepts such as radix economy, integer complexity, and who knows what else).
i'd guess getting from 1 to the desired number by multiplying its prime factors seems to be the most efficient way
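Riffing on the Hamming-weight comment above, here's a sketch under my own assumptions (moves are +1 and ×2, starting from 0): a recursive minimum-move count in the spirit of that rewrite idea, checked against the closed form of floor(log2 n) doublings plus one +1 per set bit.

```python
def min_moves(n):
    """Minimum number of +1 / x2 moves to reach n from 0: strip a trailing
    bit (undo a +1 if n is odd, undo a x2 once it is even)."""
    if n == 0:
        return 0
    if n % 2 == 1:
        return min_moves(n - 1) + 1   # last move must have been +1
    return min_moves(n // 2) + 1      # last move can be taken to be x2

# Closed form: one x2 per binary digit after the first, one +1 per 1-bit.
for n in range(1, 200):
    assert min_moves(n) == (n.bit_length() - 1) + bin(n).count("1")
```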
I was planning on leaving halfway through the video because I could see the solution, but I'm so glad I stayed because that was so incredibly beautiful, how universal those charts were across all bases
Great vid! Reminds me of something I discovered: if you extended the idea of numberations to just functions in general: imagine starting at 1 and being able to multiply by 2 whenever, and being able to subtract 1 then divide by 3 when the result is an odd number. Asking whether or not all positive integers can be reached is equivalent to the collatz conjecture.
Happy Holidays!
This also reminds me of the Collatz conjecture, where I found that (a+2)·2^(n-1) − 2 becomes 9^⌈(n-1)/2⌉ · (9a + 4.75) − 0.5, where n is greater than or equal to 3 and a is an odd prime (which includes -1).
Reminds me of the Collatz Conjecture but in reverse.
0:25 Only Domotro is strong enough to be on fire and not notice it
Setting junk on fire really enhances the performance
This is really cool. I personally have been working on an encoder where "i" is the index and "r" is the reducer value; it takes a binary value and adds it to the reducer, then multiplies the reducer by "i" + 1.
This would allow you to then reverse the operator to get "rv", where "v" is the value (0/1) and "r" is that layer's reducement.
Very neat work, very cool to learn about!
Found your channel again recently and have been blown away by your base videos, the imaginary number based are my favorites. Stay safe, no more catching yourself on fire!
The numberations remind me of generators from group theory. The tree diagrams remind me of bit shifting binary strings and incrementing them.
18:25 That's one of my favourite experiences in my job as a programmer. ...And when watching maths youtubers. I'm not particularly good at maths in practice (except those problems that I can force into being a programming problem, or which I can think of in a completely different way like music or art or whatever) but I do enjoy noticing patterns and understanding proofs even if I'm not very good at making the proofs myself.
I love it. I'm glad I found your channel! Happy Holidays!
This is the first video of yours i've seen and while im definitely a bit confused you definitely sound like you know what you're talking about and this sounds super interesting! Good work! :)
Some clever compilers do this when multiplying a variable by a constant, say
x = i * 129 is the same as x = (i << 7) + i
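A quick check of the identity this comment is pointing at; completing the cut-off line as i * 129 == (i << 7) + i is my reading of it, since 129 = 128 + 1.

```python
# Strength reduction a compiler might apply: since 129 == (1 << 7) + 1,
# multiplying by 129 can become a shift plus an add.
for i in range(1000):
    assert i * 129 == (i << 7) + i
```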
Cool video! I figured it out pretty quickly once I saw that it was related to different bases. If given any number of /2's and at most one consecutive +1, I imagine it's possible to represent any nonnegative real number less than 2.
Also, I wonder how this idea might be connected to the Collatz conjecture? The 3x+1 thing would of course require two different numberations (*3 followed immediately by +1), but it does make me wonder if one could come up with some sort of Collatz conjecture numbering system.
Merry Christmas 🎄 I love your videos. They help me learn new patterns for use in acupuncture treatments
By using the numberations:
+1, ×2, ÷2, -1
You could make every integer and even fraction (and even more)
Yeah I had the thought that improving the operators makes it possible to reach different sets of numbers, like rationals, reals, etc
How could you reach 2/6 ?
@@lyrimetacurl0 the solution would take an infinite amount of time, since 2/6 (or ⅓) has infinitely many digits in binary.
There is a visualization for categories in category theory where you draw "objects" and "arrows" just like in your visualizations. Now let's take the category with integers as objects and arrows corresponding to the numberations +1, -1, and ×3 for each object, and also use the functors P: x → x+1, N: x → x-1, and T: x → 3×x. These functors are just like the numberations we're doing. If we follow 0 after applying these functors, we can see what number it will make.
Now here's the real power:
take the category of strings containing, let's say, '0', '1', and 'T', with the arrows and functors corresponding to concatenating '0', '1', or 'T' instead of the numberations, and we find that they're actually the same.
This would be like finally seeing the same diagram you made, but in balanced ternary.
category theory is cool!
Definitely worth exploring more, I might try doing something with it myself.
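One way to realize the string-to-integer correspondence described in this thread (my own reading: each character applies ×3 and then +1, +0, or -1, a composite of the comment's numberations, with 'T' standing for -1):

```python
# Read strings over '1', '0', 'T' as balanced ternary (T standing for -1).
DIGIT = {'1': +1, '0': 0, 'T': -1}

def balanced_ternary_value(s):
    """Start at 0; for each character apply x3, then +1, +0, or -1."""
    x = 0
    for ch in s:
        x = 3 * x + DIGIT[ch]
    return x

assert balanced_ternary_value("1T") == 2    # 3*1 - 1
assert balanced_ternary_value("10T") == 8   # ((3*1 + 0) * 3) - 1
```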
Merry Christmas man .
Damn man, I really want to spend more time offline just thinking about math and stuff and you are a serious inspiration for that❤
This reminds me of something I found/learned a while back (though I haven’t technically proven this): starting with 1, and allowing multiplication and the “nth prime” operator (so P(1) = 2, P(2) = 3, P(3) = 5, P(4) = 7, P(5) = 11, and so on), we can reach all positive integers: 2 is P(1), 3 is P(2) = P(P(1)), 4 is 2 * 2 = P(1) * P(1), 5 is P(3) = P(P(P(1))), 6 is 2 * 3 = P(1) * P(P(1)), 7 is P(4) = P(P(1) * P(1)), and so on. You can get all nonzero integers this way if you start at -1 instead of 1, though I’m not sure what the “-1st” prime is (though I think it’s fair to say that 1 is the “0th prime”, so if you start with 0 then P(0) = 1 and you get all nonnegative integers)
-1 is the only negative prime number
It can be proven by induction, using prime factorization and the fact that the nth prime number is always larger than n.
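A sketch of the encoding this thread describes, assuming sympy is available (factorint for prime factorization, primepi for the index of a prime); the function name and output format are mine:

```python
from sympy import factorint, primepi   # primepi(p) = index of the prime p

def express(n):
    """Write n >= 1 using only P( ) and *, starting from 1, as in the comment above."""
    if n == 1:
        return "1"
    parts = []
    for p, e in factorint(n).items():
        # p is the primepi(p)-th prime, so p = P(primepi(p))
        parts.extend([f"P({express(int(primepi(p)))})"] * e)
    return " * ".join(parts)

print(express(7))    # P(P(1) * P(1)) -- 7 is the 4th prime, and 4 = 2 * 2
print(express(12))   # P(1) * P(1) * P(P(1))
```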
Regarding your open question (+1 costs 1, xN costs N), you'll arrive at what is called the Mahler-Popken complexity of the number.
It is sequence A005245 in the Online Encyclopedia of Integer Sequences.
Wikipedia also has a page titled "integer complexity".
I learned something too; nice idea! 👍
Ehm, addendum: not exactly the same, I now realize, but still very close and a useful reference. 😉
Yeah! Integer complexity is something I have taken tons of notes about in the past and plan to make a long episode about it before long, and while the question I had is slightly different (due to being a singular chain as opposed to showing parentheses) it is very related!
@@ComboClass What if you instead used three numberations: ×2, -1, and ÷3? Can you reach all the natural numbers starting out from 1? All the integers? What kind of number system would help to answer these questions? How do the answers change if you can't use -1 twice in a row, only after using a ×2?
@@HoSza1 *cough* Collatz *cough*
@@wyattstevens8574 no spoilers please XD
That fact that someone can be on fire for so long without noticing is kind of unnerving
I spot 2 things you can look at if you like.
1.
The width of each row follows the Fibonacci numbers.
Rows 1 and 2 have width 1, then you get 3, 5, 8, etc.
I am sure you have noticed that too.
2.
When you compare the ×2-with-+1 tree to the ×3-with-+1 tree, the same spots produce a prime number. Well, this was on your sheets. If this holds true further down the line, you could produce a prime number by simply changing the multiplication, although you wouldn't know which prime number it would be (1st is 2, 2nd is 3, 3rd is 5, 4th is 7, etc.).
Maybe there is a pattern in the sizes, in the locations, in which rows produce X prime numbers, and in how this combines with the Fibonacci numbers.
I hope you aren't burned from last night's stream.
Have you looked into divisibility tricks in different bases? I wonder if base 10 is actually quite good compared to other bases: there's nice divisibility tricks for 2, 3, 4, 5, 6, and 9 (7 and 8 are not so nice).
Interesting thoughts. So, would *any* possible combination of number operations ever reach any single transcendental number value?
Not with a finite number of algebraic operations, but possibly if you have more obscure operations or if you allow infinite amounts.
I am working on a JavaScript library that involves cool stuff with bases (and prime numbers). I think I might even extend it to allow (for example) accurate computations of square roots.
Would it help your analysis to write each numberation combo as a FSA?
This idea of "numberations" and rules for their application leads directly into the Collatz conjecture if you apply the rules in reverse. For example, instead of picking a starting number and saying "if it's odd, multiply by 3 and add 1. Otherwise, divide by 2", you could start at 1 (just to avoid needing the +1 from 0), then apply the numberation ×2 infinitely, and the numberation -1 and ÷3 (with the rule that they always occur together, in that order, and can never occur more than one time consecutively. Your current number must, of course, also congruent to 1 (mod 3) for it to work). If you can show that every number can be reached like this, that's the Collatz Conjecture
I was experimenting with this a few years ago, you get some interesting patterns when you consider number of elements in each row (for the x2 version it's Fibonacci), as well as number of steps to reach each number (for standard bases it's digit sum plus digit count). But for me the most interesting stuff happens when you can only do one of any operation in a row and you limit yourself to, for example, +1, x2, x3. The numbers that can and can't be made this way are a lot less intuitive.
This is crazy, I've been playing around with something very similar, independently. With ×2/+1, it's cool that every row count is a Fibonacci number. Also, if you look at the "6 5 8" row there, you're cutting off the "+1" branch of the number 3, because it's already appeared. So in essence, every number's "+1" branch continues until it's been cut off by the fact that it's already appeared in another branch, which also means that you can reach every number in the case of +1/×2. This happens to overlap with your rule of "no consecutive numberations of the same kind", probably. It also shows how Fibonacci numbers can be added up to make powers of 2, if you consider the "phantom branches" that would continue to double but don't because of your rules, and how, if the doubling continued, each row would have 2^n elements (the number of missing elements will also be made up of Fibonacci numbers). I've also found a cool way of visualising it, where the "+1" operation is just a spiralling number line, and then the connecting lines are the ×2's, which makes it even more Fibonacci-ish. I have a picture somewhere, too.
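A sketch of the ×2 / +1 tree under the "no two +1 moves in a row" rule, as I read the setup (I treat 1 as having just been reached by +1 from 0), printing the row widths the last two comments are talking about:

```python
def row_widths(levels):
    """Row sizes of the x2 / +1 tree with the 'no two +1 moves in a row' rule."""
    row = [(1, True)]            # (value, was it reached by a +1 move?)
    widths = []
    for _ in range(levels):
        widths.append(len(row))
        nxt = []
        for value, came_via_plus_one in row:
            nxt.append((2 * value, False))        # x2 is always allowed
            if not came_via_plus_one:
                nxt.append((value + 1, True))     # +1 only right after a x2
        row = nxt
    return widths

print(row_widths(10))   # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55] -- Fibonacci numbers
```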
@ComboClass -- enjoy your videos. I would LOVE to see you do a presentation on p-adic numbers! I want to understand them more intuitively but most explanations out there leave something lacking....
great vid, loved it man
The "double or add 1" version was used in an old game called "Wizkid" as kind of a cheat code. You would walk through different doors to reach your target number
I'm thinking that if you were to iterate these bracket graphs out a few dozen times each, you'd wind up drawing interesting (and possibly unique to each one) fractal maps for them. I have a suspicion that the strong and weak sides of the resulting graphs might settle upon a ratio that would approximate the Fibonacci constant, or maybe e instead.
I thought of a unique visualization for the trees involving additions and multiplications. Start moving in a line for each addition operation. Then when you get to a multiplication, begin moving in a new spatial dimension. The numbers will organize themselves into hypercubes of progressively more dimensions!
When do you think you will get featured on Numberphile?
I don’t know, but I’m certainly down if Brady ever wants! (And I know he travels to film with some Berkeley CA mathematicians sometimes and I live in that area)
I like the game! Maybe try the set (÷2, and ×3 then +1) ^^ good luck!
0:14 and 21:16 I've been wondering how you can stumble around in all that broken stuff and play with fire so seemingly nonchalantly without injuring yourself a lot... but I guess the trick is you _do_ injure yourself a lot 😅
For “numberations” involving certain operations you might not stumble directly on a number but you might find a sequence of numbers that approaches it
You need to be able to do +2 with the 3 tree because any number modulo 3 is either 0, 1, or 2.
Same goes for any N tree, where you need to be able to +(N-1), +(N-2), ... +1, +0 because those are all the possibilities of a number modulo N.
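A small sketch of that observation: building a number from 0 with ×N and +digit steps is just reading off its base-N digits, which is why every addition from +1 up to +(N-1) has to be available. The function name and output format are mine:

```python
def numberation_chain(n, base):
    """Express n >= 1 as a chain of '+digit' and 'xBASE' steps starting from 0,
    i.e. read off its base-`base` digits most-significant first."""
    digits = []
    while n > 0:
        digits.append(n % base)
        n //= base
    digits.reverse()
    steps = [f"+{digits[0]}"]
    for d in digits[1:]:
        steps.append(f"x{base}")
        if d:
            steps.append(f"+{d}")
    return steps

print(numberation_chain(35, 3))   # ['+1', 'x3', 'x3', '+2', 'x3', '+2']
```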
Balanced Ternary mentioned!!
Here's a fun thing to notice. Look back at those numberation trees and count the number of elements in each row. Go far enough and you'll start seeing a familiar pattern.
Initially I wanted to find the smallest non zero value you could reach for every finite number of moves taken, but now I realize that's always going to be found by using +1 every time you're allowed, since that grows the slowest.
I like the idea of small values that are computationally expensive to represent.
Oh, actually this might be a real question for the case of balanced ternary. At least, I can't solve it in my head in the time it takes me to type a comment.
Might be able to find the next in the sequence: 2, 3, 4, 82000, ? (Smallest non-0, non-1 number that is made of only 0s and 1s in all the first n bases)
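A brute-force sketch of the search this comment describes (helper names are mine; it only recovers the known terms, since the next term, if one exists, is far beyond this kind of range):

```python
def only_zeros_and_ones(num, base):
    while num:
        if num % base > 1:
            return False
        num //= base
    return True

def smallest_binaryish(max_base, limit=10**6):
    """Smallest k > 1 whose digits are all 0 or 1 in every base up to max_base
    (brute force up to `limit`; returns None if not found in range)."""
    for k in range(2, limit):
        # base 2 is always satisfied, so start checking at base 3
        if all(only_zeros_and_ones(k, b) for b in range(3, max_base + 1)):
            return k
    return None

print([smallest_binaryish(b) for b in range(2, 6)])   # [2, 3, 4, 82000]
```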
But what if those numberations are "multiply by 2" and "subtract 1 and divide by 3"?
I undecimalized the Decimal system to create the Sub-Decimal System if you wanted to see some real gnarly number patterns. (Part of it is a Base 13 system that increases at multiples of 17)
Super interested in your card game
This man's arsonic tendencies are the reason we have a 20C (68F) winter
Merry Christmas
Currently also in my Watch Later: "Avoid the Collatz Conjecture at all costs!"
Computerphile did a video on the square-and-multiply algorithm: convert the number to binary, and for each bit, if it's a 0 you square, and if it's a 1 you square and multiply, excluding the starting 1, which just gives the original number.
x 2 = shift bits over by 1; + 1 = set current bit to 1 (from 0)
It's like an abacus. If each level only has 1 bead, you count in binary. 2 beads, you count in ternary? 9 beads, you count in decimal.
all that matters is the process.
add +1
shift a bead
or
if no beads, reset line and add +1 to the line below
all numbers are touched because we're counting by 1, all numbers are unique because we have a process for labeling each next number that doesn't repeat.
i realized why every base touches every number because of this video.
Not every base: base 2, base 3, base 12, base 24, all yes. Base Fibonacci will take more thought.
What if we loosened the conditions a bit, and we have "numberations" that can, for example, only be used on even, or only odd, numbers, and we picked something like say... if a number is odd you multiply by 3 and add 1 to it, just so it's even again, and if the number is even you halve it... I wonder if there's anything interesting about this kind of pattern 🤔
both comments here have the word man, man! cool video
Thanks man
The card game sounds like Countdown but with numberations.
Thks & I'm curious;
??Who's the mystery person behind the camera?
My man! 🤘😁
Huge shout out to balanced ternary... I have found some actual practical uses
Here's my reasoning about how it's the same as binary before watching the rest of the video: It's kind of like you're printing a binary number using a typewriter with empty spaces being zeroes. Your options are to move the page one space (which doubles the number) or to type a 1. And then if you've already typed a 1 you have to move the page next. Therefore you can print any binary number.
Let's analyze the complexity of these representations.
Given a natural number k and a natural base n, we ask about the efficiency if
1) we meet a best case
2) we meet a worst case
3) we get an expected value
Case 1)
In the first case we need log_n(k) + 1 steps.
Thus we are [k]/[log_n(k) + 1] times better than simple succession. The bigger the base n, the better it gets.
Case 2)
We want to reach a number t one less than the best case.
It has log_n(t + 1) digits.
We need to add one t-1 times each time, leaving us with [log_n(t + 1)*(t-1)]/t being asymptotically equal to
log_n(t + 1). Again: the higher the base, the fewer steps you need to represent a number.
Though while this case gets worse and worse, the best case consistently stays arbitrarily small for any base.
Case 3)
Given a number p, we estimate that a multiplication by n is used in every n-th step.
So with d steps we can reach a number n^(d/n)*(n/2) or something of that spirit....
This is at least the approach that came to mind when thinking about it.
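A quick numeric check of the unweighted case analysis above, assuming the moves are +1 and ×n starting from 0, so the exact count is the base-n digit sum plus the digit count minus one (the function is my own sketch):

```python
def steps(k, n):
    """Exact number of +1 and xn moves needed to build k from 0:
    base-n digit sum plus digit count minus one."""
    s, digits = 0, 0
    while k:
        s += k % n
        digits += 1
        k //= n
    return s + digits - 1

# Best case k = n^d needs log_n(k) + 1 moves; worst case k = n^d - 1 needs d*n - 1.
assert steps(1024, 2) == 11   # 2**10: ten doublings plus one +1
assert steps(1023, 2) == 19   # all-ones in binary: 10*2 - 1
print([steps(999_999, n) for n in (2, 3, 10)])
```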
It gets really interesting when we weight the multiplications by n:
Consider the "best cases"
n = 2, k = 1024 and n = 10, k = 1000:
n = 2 requires 10 multiplications at cost 2 (+1 increment) and n = 10 needs 3 multiplications at cost 10 (+1).
So the smaller base performs better here.
Generally the cost will be
C(n,k) = n*log_n(k)+1
When comparing two bases, the condition is
C(n,k) < C(n',k)
=>
n*log_n(k) < n'*log_n'(k)
log_n(k)/log_n'(k) < n'/n
But log_n(k)/log_n'(k) is constant with regard to k (and equal to log_n(n') )
So the final condition is
log_n(n') < n'/n or
log_n(n') - n'/n < 0
Now holding n fixed and solving for a maximal n' (a function value greater than 0 meaning n' is a better base than n) and repeating the process with that better value eventually yields
n = e
as a base that cannot be improved upon.
Why is it always e...
Anyhow, the best natural base is 3 in this case.
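A quick check of this cost model in Python, using the comment's C(n,k) = n·log_n(k) + 1 (variable names are mine): base 3 comes out cheapest among integer bases, matching the n/ln(n) argument.

```python
import math

def cost(n, k):
    """Weighted cost from the comment above: each xn step costs n, plus one +1 step."""
    return n * math.log(k, n) + 1

k = 10**6
for n in range(2, 11):
    print(n, round(cost(n, k), 1))        # base 3 gives the smallest value

# The per-step factor n / ln(n) is minimized at n = e ~ 2.718,
# so among integer bases the cheapest is 3.
print(min(range(2, 11), key=lambda n: n / math.log(n)))   # 3
```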
Man, this is cool
Watching this with one hand busy
Those trees remind me of Surreal numbers
Surreal numbers next professor
Those won’t be next, but will be a topic at some point :)
Do you think you'll join in with the seximal vs binary discussion with Jan Misali?
I agree with Jan Misali that base six is superior, and it will show up in future episodes
Can you do a video about base=i...
I did a video about base 2i (which is more functional than a base i would be)
@@ComboClass I will look for it, thanks!
That's Numberwang!
Combo class is still cool
It's only about 1.5 years old so far, so there's much more coolness to emerge haha
@@ComboClass Keep it up domotro
Burn n Branch Baby!
I also invented a new form of Magic the Gathering that mixes it with Poker
Do you guys grow bamboo?
Yeah there’s bamboo here! (although I’m not sure I’d say I grew it, since it was here when we moved in and we just trim it occasionally)
@@ComboClass I just ran into some bamboo in the wild for the first time in my life (here in Michigan). Well, more of a garden escape than "wild".
why do i have a feeling the class is getting more and more messed up, like how is it still whole?
Clever
now solve 3n+1 this way
good
I wanna meet the camera operator 🤷🏾♂️
My main camera person is named Carlo. He had a cameo in a few past episodes (I think 3 times?) but I do plan on including him more
@@ComboClass Carlo?! The mystery continues.......🔥🔥🔥🔥🔥
My homeless professor is truly smart😂
What you have to say and the way you say it keeps me coming back for more. I get caught up in what you are saying, but then something falls or a fire starts, and it pulls me out of it. it's distracting. You don't need the slapstick humor to draw people in. I would suggest that you keep everything else as-is, the backyard, the clocks and other props, feeding the squirrel, etc, but lose the slapstick. I like comedy, including slapstick, but it just doesn't fit here.
The cameraman needs to stop frigging zooming in and out while you're sitting on a bench not moving, FFS; it's making me nauseous.