I'm happy that there are indeed people talking about half-iterates! I got to a similar result before by doing f(f(x)) = e^x, then taking the Taylor polynomial of that centered around a 2-periodic point, which I then approximated. Definitely more convoluted than simply taking the approximation as you did!
Wow! I’ve never seen a generalisation of tetration, and I’ve tried to find one by myself for a long time. This video has just blown my mind; props to Taylor series for being the MVP of tetration. Lastly, you should also mention that it works for any base greater than zero, because you can rewrite the base as e to the power of its natural log. Great work! I can’t wait for your channel to blow up!
I am very happy I found this video, because I wrote a basic research essay as part of my IB diploma about tetration and how to generalize it. I covered the things you covered at the beginning of the video, but then my attempt was to define a piecewise function that is defined differently on each interval [n, n+1]. You only need to set the function on [0,1] to be able to extend it to all the other intervals, but of course this way the function isn't analytic, and even if it's differentiable once, it's not differentiable more than once. This was 5 years ago, and this video brought back many memories and made me think of the problem differently. Thanks a lot!
Wow, that was brilliant! I have actually been trying to come up with a generalisation for tetration on my own for more than two years now, and at one point I had an approximation for some very special cases, but nothing like what you just presented. So I was pretty hyped when I saw your video in my recommendations; I watched it completely and was totally hooked right from the beginning. Your explanations are very clear and paced just right. I really hope that you win #SoME3! Best of luck!
There is no generally accepted calculation for real and complex tetration, unlike the hyperoperators before it (succession, addition, multiplication and exponentiation)
If you want to do tetration as shown in the video, you can do it for all rationals (because the approach theoretically gives you a tetrated to the 1/n, and you can then tetrate the result again). But actually taking limits, instead of just approximating them with that approach, is not going to be possible, I think. And before we get ahead of ourselves, we have to ensure continuity before taking a limit in the first place. You might also find that this way of approaching tetration has some undesirable properties, or is missing some that you wanted it to have (like differentiability).
Interestingly, the most natural derivation of tetration uses ordinals, objects from formal logic. I would describe them as the unique numbers with the property that for every set of ordinals, there is a smallest ordinal above every ordinal in that set. (*You aren't allowed to make a set of all ordinals.) This is enough to play with for hours.
Great video! Half iterates and the like (and the tricks to compute them) have always fascinated me, and this video scratches that itch nicely. Thanks for making it!
Yes, this process could definitely be made more efficient (as I said at the end)! This is the problem solving route that I took when initially attempting this problem for myself, so there are definitely some places it could be made more streamlined.
Yeah, the nested-function notation already exists and looks quite familiar, especially: fⁿ(fᵐ(x)) = fⁿ⁺ᵐ(x), or in an even more familiar form: fⁿ ○ fᵐ = fⁿ⁺ᵐ
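That composition law is easy to check numerically; here's a small Python sketch (`iterate` is just a helper name I'm inventing):

```python
def iterate(f, n):
    """Return the n-th iterate of f, for a non-negative integer n."""
    def fn(x):
        for _ in range(n):
            x = f(x)
        return x
    return fn

f = lambda x: x + 1  # a simple example function

# f^2(f^3(x)) should equal f^5(x) for any x
x = 10
lhs = iterate(f, 2)(iterate(f, 3)(x))
rhs = iterate(f, 5)(x)
print(lhs, rhs)  # both 15
```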
@@user-pr6ed3ri2k Exactly, and this notation is also used in the video. He spends a significant part of the video first inventing and introducing his own notation, then using it to explain the recursive part, and then introducing an equivalent notation and proving they're equivalent. @ OP: I think it would have made the video better and much easier to understand if you had only introduced the usual notation and used that to explain the recursion, instead of taking this detour, which doesn't add anything but complexity. Even if that was the way you yourself got to the solution, it doesn't hurt to streamline the path a bit for the video.
To me the most interesting thing about tetration is the possibility of creating an inverse function. After all, almost every new type of number was created as a result of an inverse operation derived from simple unary operations.

Assume we only have the number 1. With the unary operation NOT, you can find 0. Now with these two numbers alone it opens up the ability to identify truth or falsities. Addition is simply repeated incrementation, another unary operation, and the inverse of addition is subtraction. Now that you have the ability to unlock all whole numbers, you can count things! But subtraction brings up the question: what happens when you subtract more than you have? Well, you get the negative numbers! Okay, so we can count debt of whole-numbered things now.

But what if we want to repeat addition multiple times to compress on-paper work? Well, you get multiplication, with its corresponding inverse, division. Now you can represent numbers which are only parts of a whole object, the rationals! And repeated multiplication? Exponentiation, along with roots (which are still really just exponentiation) and logs! Thanks to being able to place our earlier-discovered numbers into the operands of roots and logs, we find two new things: irrationals and complex numbers!

But this brings up the question: if all of these numbers are simply the result of extending the input domain of inverses of repeated incrementation, then could there be a new type of number originating from the inverse of tetration? Perhaps transcendentals become possible to evaluate naturally? Maybe a natural implementation of quaternions without just assuming that such an extension already exists? Or perhaps something entirely different, that we can't even begin to understand the purpose of until we discover it?
The problem with your analysis is that it is just inaccurate. The complex numbers and the real numbers are not mathematical structures defined in terms of "inverse operations" applied to other mathematical structures. Mathematical structures, in reality, are defined in terms of the axioms they satisfy.
@@angelmendez-rivera351 I'm not necessarily saying they're "defined" in terms of these inverse functions, but rather that they can be used to extend the domain and range of their corresponding functions. If it were absolutely the fact that complex numbers did ot exist, then Y = the square root of X would not exist in the domain X
@@GameJam230 *I'm not saying they're "defined" in terms of these inverse functions...* Uh, no, this definitely is what you said. I know how to read. I quote: "After all, almost every new type of number was created as a result of an inverse operation derived from simple unary operations."

*Assume we only have the number 1. With the unary operation NOT, you can find 0.* No, this is not how the NOT operation works. The NOT operation is an operation in Boolean logic; it does not apply to natural numbers, much less any extensions thereof.

*Now, with these two numbers alone, it opens up the ability to identify truth or falsities.* Sure, you can map TRUE to 1 and FALSE to 0, but this is not the system of natural numbers, nor can you construct the system of natural numbers from this.

*Addition is simply repeated incrementation, another unary operation,...* No, this is factually incorrect. In Boolean logic, addition is defined by 1 + 1 = 0 and 0 + P = P + 0 = P. There is no incrementation operation in Boolean logic.

*Now that you have the ability to unlock all whole numbers, you can count things!* The only numbers you can unlock in Boolean logic are 0 and 1, and these are not natural numbers, but truth-values. You cannot construct the natural numbers from this structure alone. This is why, in set theory, the set of natural numbers is said to exist axiomatically. In abstract algebra, the natural numbers are, again, defined axiomatically. They are never defined in terms of Boolean logic. Also, if you want to count, then you need the class of cardinal numbers, not just the set of natural numbers.

*But subtraction brings up the question- what happens when you subtract more than you have? Well, you get the negative numbers!* See, you are doing the thing you said you were not doing: claiming that the various number-system extensions are created via inverse operations. The integers are actually defined in the context of group theory, not in terms of subtraction.
The same applies to all other number systems. They are defined by axiomatizing mathematical structures. The fact that you can define new operations on extensions of previous structures is purely a coincidence (since it does not always work).

*But this brings up the question- if all of these numbers are simply the result of extending the input domain of inverses of repeated incrementation,...* See? You did it again.

*...then could there be a new type of number originating from the inverse of tetration?* It is conceivable that there may be some axiomatizable mathematical structures which can be axiomatized to be extensions of the current structures we use, AND in which tetration, when formally axiomatized, happens to be both surjective and injective. However, it being conceivable does not mean it is possible. The research in abstract algebra with regard to tetration is very minimal, but what little does exist does not support the idea that an extension of these mathematical structures with the desired properties exists.
@@angelmendez-rivera351 Okay, yes, I phrased a few things poorly to how I actually meant them, but I'd like to point out how one of your first points was complaining that I was using boolean operations in the natural numbers, and then the next 3 points were complaining that (and how) I was using arithmetic operations in the boolean system, which is just contradictory. Did you ever consider that we can talk about MORE than one system of logic at the same time, and that maybe, JUST MAYBE, they are all in some large way connected? Point is, I just came here to explain a neat thought I had, and you decided it wasn't good enough, so I tried to explain what I actually meant, and your first thought is "Yup, I should pick apart absolutely every quote I can from this man's speech and completely shit on anything he's ever dared to think of". People like you are the reason why I will never enjoy math enough to actually consider doing it in school- because there's no room anywhere to merely SUGGEST AN IDEA or ASK A THOUGHT-PROVOKING QUESTION unless every word of it means EXACTLY what it says, instead of the listener being able to apply a little common fucking sense and consider what the speaker actually MEANS.
That has been a mind-boggling question of mine for two years now, and I've come up with two inverses of tetration: the tetration-root and the tetration-logarithm, the functions that give back the base or the height respectively. I've found no extensions beyond the natural numbers so far.
I really hope that all the mathematicians agree on expanding this marvellous monster operation, taking inspiration from this video! Congrats on this great video! 👏👏👏
Thinking about repeated exponentiation started the chain of events which eventually led to my mathematics obsession, and I'm now in my last semester of undergrad in maths! Cool to see a video on this topic
C++ and FreePascal (which I use) are the best. By the way, it's very satisfying to implement your own math library from scratch. I had to do it when I realized that the complex-number implementation in FP was awful, so I implemented them myself, along with all the math stuff up to the Lambert W function. It was fun and I learned a lot. And I admire anyone who does their math work through their own programming, without reaching for things like Wolfram Mathematica.
Might mention that addition, the level-one operation, can be considered iterated counting, i.e. adding 1 repeatedly: 2+2 is 2, 3, 4. This makes counting the level-zero operation, which is an interesting parallel to exponents of zero returning 1.
I like this video a lot, and I think this channel will become a great maths channel. I wish you the best, and I'll be waiting for more content like this. Keep it up!!
Can the ‘hyperoperations’ themselves be generalised? I.e. if addition is the first hyperoperation and multiplication the second, is there any meaning to a 1.5th hyperoperation?
Why don't you take a look into the question and tell us what you find. 😛 Seriously, I wonder the same thing. If you define h(n, a, b) := the nth operation applied to a and b, so that for example h(2, 3, 4) = 3 * 4 = 12 and h(4, 2, 3) = 2 ↑↑ 3 = 16, then can you somehow evaluate h(1.5, 2, 3)? From what I've gathered, h(n, 2, 2) should be 4 for all values of n; that might be a useful clue. Come to think of it, you might be able to use a technique similar to @tetrolith's to get an approximate value.
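For integer n ≥ 1 and b ≥ 1, that h is easy to sketch in Python (the recursion and base cases below are the usual hyperoperation ones; this says nothing about non-integer n, of course):

```python
def h(n, a, b):
    """n-th hyperoperation: h(1,.) is +, h(2,.) is *, h(3,.) is ^, h(4,.) is tetration.

    Only valid for integers n >= 1 and b >= 1.
    """
    if n == 1:
        return a + b
    if b == 1:
        return a  # one "copy" of a under any higher operation is just a
    return h(n - 1, a, h(n, a, b - 1))

print(h(2, 3, 4))  # 12
print(h(4, 2, 3))  # 2↑↑3 = 16
print([h(n, 2, 2) for n in range(1, 6)])  # all 4s
```

The `[h(n, 2, 2)]` line checks the clue above: 2 op 2 is 4 at every integer level.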
I found a blogspot post by user paurea called “Supersum? Subproduct?” discussing this exact problem (operation 1.5), and they used Lie algebra stuff or something (didn’t understand that part lol)
I tried solving the equations at 20:25 analytically. Yeah... not gonna happen. However, after solving these numerically, _I got different results_ from what you have:

a≈0.480784, b≈1.48769, c≈-0.848109, giving a+b+c = 1.12037
a≈0.496487, b≈0.845326, c≈0.340039, giving a+b+c = 1.68185

There are six additional complex-valued triplets. I did plug these triplets back into the original three functions and verified they do indeed work. Therefore, I don't know where the problem is. It could be that the equations were transcribed incorrectly, but the following looks right: a+ab+(a^2)c=1, b^2+2abc=1, bc+2a(c^2)+b(c^2)=1/2. I don't have the time to redo your derivation today, but I could look into it later this week if enough people want me to.
I tried plugging in your values myself and I found your problem. Your a, b, and c satisfy bc + 2ac² + bc² instead of bc + 2ac² + b²c (note the last term!) Maybe you could retry your calculations haha
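For anyone who wants to re-run the numbers: below is a minimal pure-Python Newton's-method sketch (finite-difference Jacobian, small Gaussian elimination; all function names are mine) for the corrected system a + ab + a²c = 1, b² + 2abc = 1, bc + 2ac² + b²c = 1/2:

```python
def F(v):
    """Residuals of the corrected system; a root makes all three zero."""
    a, b, c = v
    return [a + a*b + a*a*c - 1,
            b*b + 2*a*b*c - 1,
            b*c + 2*a*c*c + b*b*c - 0.5]

def solve3(A, r):
    """Solve the 3x3 linear system A x = r by Gaussian elimination with pivoting."""
    n = 3
    M = [row[:] + [r[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda i: abs(M[i][col]))
        M[col], M[piv] = M[piv], M[col]
        for i in range(col + 1, n):
            f = M[i][col] / M[col][col]
            for j in range(col, n + 1):
                M[i][j] -= f * M[col][j]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def newton(v, steps=50, h=1e-7):
    """Newton iteration with a forward-difference Jacobian."""
    for _ in range(steps):
        Fv = F(v)
        J = [[0.0] * 3 for _ in range(3)]
        for j in range(3):
            vp = v[:]
            vp[j] += h
            Fp = F(vp)
            for i in range(3):
                J[i][j] = (Fp[i] - Fv[i]) / h
        d = solve3(J, Fv)
        v = [v[k] - d[k] for k in range(3)]
    return v

a, b, c = newton([0.5, 0.85, 0.3])          # starting guess near the real solution
print(a, b, c, a + b + c)                   # a + b + c approximates e^^(1/2)
print(max(abs(r) for r in F([a, b, c])))    # residual, should be ~0
```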
The third equation onscreen at 14:32 can be derived from the top and bottom equations. The other three equations are the three axioms of function iteration.
Man, this is so funny. I spent months working on the same problem and we both took the same approach with truncating the Taylor series and using software to calculate the coefficients XD. The best my software could do was an 18th order approximation, but I soon realized that there's actually multiple solutions for the coefficients, which gave me doubts (in fact, there is a continuum of solutions for the full series expansion). I'm sure you're aware of the many tetration forums which use more advanced methods (that go way over my head), but fascinatingly, no one appears to know what the "correct" analytic solution is yet. I am amazed that this is still an active area of research. I will bow down to whoever can find a nice formula for tetration over non-integers, be it the coefficients of the taylor series, or even an integral like the gamma function. Thanks for bringing more awareness to this problem. Great video!
And if anyone wants to go down a rabbit hole like me, you'll find echoes of someone named "Kneser", who is apparently behind the (unproven) but widely believed to be canonical solution for tetration.
@@angelmendez-rivera351 Sorry, I probably am wrong about that. It was my understanding that his solution was proven to be the unique solution, if it is indeed a solution. Perhaps it was also proven that the solution is valid as well?
I wonder how unique the prescription f(f(x)) = e^x at x = 1 for e tetrated 1/2 times is. If I just take a simpler example, f(f(x)) = x doesn't give a unique f(x), because some possible solutions are x, -x, 1/x, -1/x. But the two solutions x, 1/x give the same value at x = 1. I am not sure whether f(f(x)) = e^x implies a unique solution, or multiple solutions, and whether those coincide at x = 1. More generally, what are the solutions of f(f(x)) = g(x) for a known g? I assume that with more and more repeated tetration, the number of possible solutions might grow, so I think we're talking about some "principal solution" here, defined via the Taylor series.

I can imagine how to extend this to rational numbers p/q: first, find the 1/q case from f(f(f(...f(x)...))) = e^x (left-hand side nested q times), then take the resulting function f(x) and nest it p times, f(f(...f(x)...)) (p times), and plug in x = 1. Real numbers would probably work the same: find an approximation r ~ p/q and call the result the approximation for e tetrated r times.

Question: how do we know that this (rather opaque and complicated) process results in a smooth function, i.e. if p1/q1 and p2/q2 are somewhat close, are the tetrated results somewhat close (in a continuous sense)? Only then does it make sense to extend it to the positive reals. Finally, any idea on how to extend to complex numbers? e tetrated i times, anyone? ;))
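The p/q construction is easiest to sanity-check on a toy function whose fractional iterates are known exactly: for g(x) = x + 1, a q-th compositional root is x + 1/q, and nesting it p times gives x + p/q, which does depend continuously on p/q. A quick Python sketch (the helper names are mine):

```python
from fractions import Fraction

def nest(f, times):
    """Compose f with itself `times` times."""
    def g(x):
        for _ in range(times):
            x = f(x)
        return x
    return g

def rational_iterate_of_successor(p, q):
    """The p/q-th iterate of g(x) = x + 1, built exactly as described above."""
    root = lambda x: x + Fraction(1, q)  # a q-th compositional root of x + 1
    return nest(root, p)

# sanity check: the 1/q-root nested q times recovers g(x) = x + 1
print(rational_iterate_of_successor(3, 3)(0))  # 1
print(rational_iterate_of_successor(1, 2)(0))  # 1/2
print(rational_iterate_of_successor(3, 4)(0))  # 3/4
```

Of course x + 1 is far tamer than e^x (its roots are unique among functions of that form), so this only illustrates the construction, not the uniqueness question.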
Can't we go further, to negative-number tetrations? That was my question, brother. Also, you are an absolute genius, because you are the only one I've seen do this on this huge platform, bro. You deserve more support. Thank you, man!
In one of Ramanujan's lost notebooks you would find a general, efficient method to compute the Taylor expansion of the iterates of f(x) = exp(x) - 1 (hence discovering the Bell numbers long before Bell). I'm not sure, but I'd guess he was interested in generalizing those coefficients to non-integral values.
Can you give something like a notebook and page number? I have worked out some inefficient ways of doing fractional iteration of g(x) = ln(x + 1) (which you can then combine with integer steps of your f(x) to get fractional iterates of f). Btw, in the C and Python math libraries, f = expm1 and g = log1p.
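A quick check of that last remark, using Python's standard math module: expm1 and log1p really are mutual inverses (up to rounding), which is what makes mixing integer steps of f with fractional steps of g legitimate.

```python
import math

x = 0.7
print(math.log1p(math.expm1(x)))  # g undoes f: back to 0.7
print(math.expm1(math.log1p(x)))  # f undoes g: back to 0.7
```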
0:24 - 0:42 No, this would be incorrect. If you apply the operation 3 times, then you have 2 (+ 2) (+ 2) (+ 2) = 2 + 2 + 2 + 2 = 2•4. To put it differently, +(2, +(2, +(2, 2))) = •(2, 4). The 2 appears 3 times in the sum denoted by 2•3, but the + operation only appears 2 times. Also, the idea that multiplication is "repeated addition" is just incorrect in general, and is only legitimate when working specifically with natural numbers. It does not hold when multiplying rational numbers (e.g., 2/5•3/7), real numbers (e.g., e•π), complex numbers (e.g., (2 + i)•(1 + 5i)), matrices, functions, or any other type of mathematical object. The actual definition of multiplication is that it is some binary operation which distributes over addition. In general, there actually are multiple such operations, so you need to specify which multiplication you are working with.

0:53 - 1:03 This is the same mistake as earlier. 2(•2)(•2)(•2) ≠ 2^3, but 2(•2)(•2) = 2^3. The operation occurs 2 times, not 3 times. The exponent tells you the number of "copies" or "occurrences" of the number being multiplied, not of the operation itself. This mistake could easily be avoided if you said instead "2•3 is 2 added to 0 exactly 3 times" and "2^3 is 2 multiplied to 1 exactly 3 times." In general, you define a function f[a](x) = a#x, where # is some arbitrary binary operation, so f[a](1) = a, and in general, (f[a]^n)(1) = a%n, where % is a new binary operation, the "repeated version of #", and f^n is the nth iterate of f. Notice, though, that this is only well-defined for natural numbers n.

2:20 - 2:28 This is the same mistake again. The correct definition is given by defining g(m) = 2^m, and then saying 2^^3 = g(g(g(1))). Notice how g is applied 3 times.

4:06 - 4:16 This is not the commutative property, because this property, while true, is not named "the commutative property." For a given binary operation #, the commutative property for said operation states that x#y = y#x.
If # = ^, then the commutative property says that x^y = y^x. This is clearly untrue: 2^3 = 8, but 3^2 = 9.

4:38 - 4:40 This is also incorrect, as exponentiation does not distribute over multiplication. Yes, (x•y)^n = (x^n)•(y^n), but one does not have x^(m•n) = (x^m)•(x^n). Instead, one has x^(m + n) = (x^m)•(x^n). Therefore, ^ does not distribute over •.

6:08 - 6:10 Well, hold on. x^0 ≠ x/x in general. You are assuming x^(m - n) = x^m/x^n prior to proving it. If you want to solve x = x•x^0 for x^0, under the assumption that the equation holds for all x (which is a requirement), then note that this means x^0 = 1, solely because x = x•1 for all x. Therefore, x^0 = 1 holds true, even for those x for which x/x is not well-defined. This is important when x is, for example, a function or a matrix.

6:10 - 6:32 A more natural way of handling this would have been to prove x^(m + n) = (x^m)•(x^n) via the recursion, and then insist that m, n should be allowed to be arbitrary integers, and not merely natural numbers.

8:17 - 8:21 This rule cannot work unless you restrict x to be a real number, and specifically, x >= 0. I suppose you were going to do this restriction anyway, but noting this is important: the extension you are about to attempt can never possibly work for arbitrary x.

9:36 - 9:39 Again, calling it the commutative property is incorrect, and in fact, misleading.

12:03 - 12:24 This was unnecessarily complicated. All you need to do is realize that applying a unary function 0 times is the same as doing nothing, which is the same as applying the identity function. This is more natural, more intuitive, and is not prone to the erroneous recursion-solving you did earlier in the video, and which you almost did again here.

13:23 - 13:29 The criticisms of the previous section apply here as well.

14:03 - 14:11 And this is the reason I criticized your approach earlier.
Your argument relies on the function A being invertible here, yet the 0th iterate of a function is always the identity function, regardless of whether it is invertible or not. Only the negative iterates should actually depend on the existence of an inverse.

15:47 - 15:55 Right, so the problem here is that ln(-1) and ln(-2) are not well-defined (some people would say they are multivalued, but that is just a fancier way of saying "not well-defined"). When it comes to defining the iterates of a function, the domain and codomain (and range) are very much relevant. I mean, to begin with, function composition is only well-defined when you take the domain and codomain (and range) into account. In this case, the exponential function exp has domain R, but its range is (0, ∞), not R. Therefore, you will run into problems if you are not careful about this. In general, exp^n has range (e^^(n - 1), ∞) when n > 0. The range of the identity function is R, but the inverse function of exp, ln, does not have R as its domain, but rather (0, ∞) as its domain, and R as its range. Further negative iterates have an even more restricted domain: (e^^(n - 1), ∞) is the domain of ln^n = exp^(-n). So, in your column for n = -1, everything above x = 1 should be empty.

16:08 - 16:12 No, they are not well-defined.

16:12 - 16:19 If you are going to avoid the topic, then I would recommend that you avoid making a factually incorrect statement on said topic, even if it is meant to make it "easier" to understand. It would be even easier to understand if you had simply left those slots in the table empty.

19:49 - 20:00 This is problematic. You are assuming, by replacing exp with its second-degree Maclaurin polynomial approximation, and by insisting that f also be a second-degree polynomial, that this polynomial will indeed be a suitable approximation for the exact solution to the equation f°f = exp.
However, this methodology is false in general, so it is important that you do prove that it works in this specific case. In fact, this is probably the most important section of your entire video, yet you just skipped it entirely, and took for granted that it works. I know you are doing this for illustration purposes, but your video has not made it clear that what you are doing would normally require justification at all, nor does it clarify that you are indeed making an assumption for the sake of simplicity. As such, there are going to be plenty of viewers who will watch this part of the video, take whatever the result here is, and accept it as irrefutable fact.

The greatest mistake here, though, is that you are assuming that f°f = exp is an equation with a unique solution, but this is not the case: the set of all solutions to this equation is uncountably infinite. As such, exp^(1/2)(1) is just not well-defined at all. You mentioned the existence of an "accepted" value for e^^(1/2) (which is debatable), but this accepted value definitely does not come from solving this equation over the real numbers, so the fact that you neglected to mention this is a huge problem.

----------

Overall, I can see you put a lot of effort into the video. The visual presentation was simple but effective, and I do appreciate that you kept the video rather to the point without going off-topic unnecessarily. That being said, the quality of your videos would improve drastically if you avoided using mathematical terminology incorrectly, as that achieves nothing except misinform people, even if this is not your intention. Also, I think you need to make the purposes of what you are bringing up at any given time in the video clearer, so that assumptions meant for the sake of simplicity are not treated by viewers as facts (which is a problem I have observed in the comments section). Keep it up!
Thanks for the feedback! There is definitely lots of room for improvement. Most of the things you pointed out were done in order to make the video more accessible (for example, "defining" multiplication as repeated addition, as this concept was helpful later on when looking at repeated operations). There are a ton of formalities I totally skipped over for time and to maintain the flow of the video. My erroneous use of the term "commutative property" is inexcusable, though! Again, thanks for the valuable feedback!
@@Tetrolith-ko5yu If it interests you, I found, thanks to someone, an exact formula (a limit) to calculate tetration for much larger values! Even in the complex plane (complex height in some cases). Technically, it has already been found (Schröder and Koenigs), but if you're interested, tell me!
Over the past few months I have been investigating Graham's number, and I wonder if there is a way to generalise Knuth's up-arrow notation to non-integers. I wonder if this method could be generalised to a↑↑↑b where a and b are non-integers.
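For positive-integer arguments, Knuth's up-arrow notation is straightforward to write down; the recursion below is the standard one, and extending it to non-integers is exactly the open problem (Python sketch, `arrow` being my name for it):

```python
def arrow(a, n, b):
    """a ↑^n b for positive integers a, n, b; n = 1 is ordinary exponentiation."""
    if n == 1:
        return a ** b
    if b == 1:
        return a
    return arrow(a, n - 1, arrow(a, n, b - 1))

print(arrow(2, 2, 3))  # 2↑↑3 = 2^(2^2) = 16
print(arrow(3, 2, 2))  # 3↑↑2 = 3^3 = 27
```

Be warned that values explode immediately: even arrow(3, 3, 3) = 3↑↑↑3 is a tower of 3s over seven trillion levels tall, far beyond direct computation.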
I wrote an entire paper about "nth compositional roots of functions", which was your question about f(f(x)) = e^x. In the paper, I proved under what conditions a one-to-one function has an nth compositional root. Or, in your terminology: given some g(x), when does f(x) exist such that n nested applications of f(x) equal g(x)?
I take issue with the way you showed there is no distributive property at 4:43. When talking about the distributive property of multiplication, we use addition: 4(x+y) = 4(x)+4(y). When talking about the distributive property of exponentiation, we use multiplication: (xy)^2 = (x^2)(y^2). So why are you using multiplication when talking about the distributive property of tetration? Shouldn't you instead be using exponentiation? I would like to know whether the distributive property exists in this form or not, but I guess I can look that up in my own time.
Approximate a bunch of values with close inputs (e.g. b tetrated by 0.01, b tetrated by 0.02, ..., b tetrated by 0.1), and use those values to approximate the nth derivatives of ˣb at x = 0. Then use THOSE values to approximate the coefficients of the Maclaurin series for ˣb, and plug in any complex number you'd like. Theoretically it should work, though it would be extremely inefficient to try, and it would take a while just to get the first digits right.
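Here's a rough Python sketch of that idea, using forward differences on closely spaced samples to estimate the first few derivatives at 0. Since we can't actually sample ˣb, it's demonstrated on exp (true Maclaurin coefficients 1, 1, 1/2), which also shows how rough the finite differences are; the function name is mine:

```python
import math

def maclaurin_coeffs_from_samples(f, h=0.01):
    """Estimate the first three Maclaurin coefficients of f from samples at 0, h, 2h."""
    f0, f1, f2 = f(0.0), f(h), f(2 * h)
    c0 = f0
    c1 = (-3*f0 + 4*f1 - f2) / (2*h)   # forward difference for f'(0), O(h^2) error
    c2 = (f0 - 2*f1 + f2) / (h*h) / 2  # forward difference for f''(0)/2, O(h) error
    return c0, c1, c2

c0, c1, c2 = maclaurin_coeffs_from_samples(math.exp)
print(c0, c1, c2)  # roughly 1, 1, 0.5
```

As the comment predicts, accuracy degrades fast for higher coefficients; the c2 estimate is already off in the third digit, so getting many correct digits of a complex-plane evaluation this way would be painful.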
This is amazing! I first learned about tetrations last year. I never knew tetrations can go that far! Maybe there could be a possible operation of repeated tetrations that can go beyond our knowledge!
11:00 It's repeated function composition. Another notation would be $(x + 1)\overset{3}\circ x$, with $\circ$ being the composition operator, read as "passing $x$ through the $(x + 1)$ function $3$ times".
With your polynomial approximation, it seems that it is most accurate around x = 0. Since f(f(x)) = exp(x), we also have f(f(0)) = 1, and f(0) is e tetrated -1/2 times, so exp(f(0)) is e tetrated 1/2 times. In your 2nd-degree polynomial, f(0) is 0.4979, which gives exp(0.4979) = 1.64526, which is already a lot closer to the actual result.
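Checking that arithmetic in Python (0.4979 being the f(0) value quoted above):

```python
import math

f0 = 0.4979          # f(0) from the 2nd-degree polynomial approximation
print(math.exp(f0))  # ≈ 1.64526, closer to the accepted e^^(1/2) ≈ 1.6463
```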
Is using 3 parameters with the "exp" function (a superscript, a subscript, and a regular argument) standard? I didn't know what this meant, and the internet didn't help. Also, is there any way that non-integer hyperoperators (e.g. the 1.5th hyperoperator, between addition and multiplication) make sense? :)
I'm no slouch on reasonable pure mathematics, but I can't claim to be an explorer of the outer edges. So I have a question. Aside from perhaps a few instances of simplified notation, are there any applied uses for tetration? Or theoretically, quadration, pentration, sextration, etc? Not that I'm dismissing the field - there is validity in considering the nature of iteration of iteration. I'm just curious to know.
According to this logic, the 1/3 tetration of e would mean we need to find f(x) such that f(f(f(x))) = e^x and then evaluate f(x), and 2/3 would mean we evaluate f(f(x)). Tetration by 1/π, 1/e, and anything like that are the limits of approaching those numbers from rational numbers (a lot like taking something to an irrational power). Unfortunately, using the Taylor-series method in the video to find even a quadratic approximation of f(x) such that f(f(f(x))) = e^x gives formulas too complicated for any website I found to be able to solve.
News: after tinkering around in Desmos, I found an approximate quadratic formula which, when iterated three times as f(f(f(x))), is VERY close to e^x (at least for small values). Here it is: 0.2x^2 + 0.81x + 0.4. This makes e^^(1/3) approximately equal to 1.41 and e^^(2/3) approximately equal to 1.94.
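That claim is easy to check numerically in Python, at least at x = 1:

```python
import math

f = lambda x: 0.2*x**2 + 0.81*x + 0.4  # the quadratic found in Desmos

x = 1.0
print(f(x), f(f(x)))          # 1.41 and ≈ 1.94, the e^^(1/3) and e^^(2/3) estimates
print(f(f(f(x))), math.e)     # ≈ 2.7237 vs e ≈ 2.7183
```

The triple iterate lands within about 0.005 of e at x = 1, though the fit drifts away from e^x for x further from 1 (at x = 0 the error is already around 0.13).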
Fascinating! It's a pity I don't fully understand English, but mathematics is a universal language and I was able to follow the reasoning... though it's a video to rewatch and listen to again to understand it fully. THANK YOU for these demonstrations.
I don't have one, but I did notice that for increasing the degree of the approximation, the coefficients of the Taylor approximations of f(x) tended to converge, if that counts for anything!
I'm enjoying this, but it grates that the video makes the 'fence post error' right out of the gate and then keeps making it. '2 x 3' does not mean 'add 2 to itself 3 times', it means 'add 2 to itself twice'. It's the number that shows up n times, whereas the operation shows up (n-1) times.
Yes, and this is all due to a lack of rigor and a poor understanding of the language. These small details are ultimately important, and they tell me a lot about how much one should trust a mathematics communicator on YouTube.
In fairness to the creator of this video, I think that other than that aspect, it's very good. Also, SoME (which is the context for this video) is a superb initiative that's supposed to be about new maths popularisers trying new things, and that will inevitably involve some stumbles. So I salute the creator, and I hope he reads this and is not discouraged.

Furthermore, I am NOT particularly trained or knowledgeable about any of this stuff; I am very much a stumbling amateur myself. So what follows may be wrong. But having reflected on it some more after watching this video, I think that instead of '2 x 3 means add 2 to itself twice', the best way of expressing it MIGHT be to say '2 x 3 means add 2 to THE IDENTITY UNDER ADDITION three times'. In other words, '2 x 3 means take ZERO and add 2 to that, doing this 3 times in succession'. Then '2 ^ 3 means take ONE and multiply that by 2, doing this 3 times in succession'. If you try to extend this to tetration, you will hit difficulties, because there isn't (straightforwardly) an identity under exponentiation. But then if you explore that fact, I think it takes you into similar territory to the ideas explored in the rest of this video.
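That "seed with the identity element" reading can be written directly as a fold (a quick Python illustration, nothing more):

```python
from functools import reduce
from operator import add, mul

# 2 x 3: start from the additive identity 0 and add 2 three times
print(reduce(add, [2] * 3, 0))  # 6
# 2 ^ 3: start from the multiplicative identity 1 and multiply by 2 three times
print(reduce(mul, [2] * 3, 1))  # 8
```

Note that the fence-post problem disappears: the operation really does occur exactly 3 times in each fold.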
Because we want to evaluate at x=1, shouldn't we take the Taylor series at x=1 instead of x=0? I mean, the error of the Taylor series at exp(1) is 0.21 out of 2.71... The algebra part would then be the same I guess, using x+1 instead of x everywhere? I haven't done it so I don't know if it would end up giving the same answer TBH, but it sounds more correct to me to do it at x=1...
Last night, I stumbled upon the concept of commutative and fractional hyperoperations, and of course I get recommended this today! Great explanation of tetration! Are you at all familiar with commutative exponentiation and the like? It looks like f(a,b) = e^(ln(a)*ln(b))
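For anyone curious, the commutative exponentiation mentioned here is easy to play with numerically (a quick Python sketch; `commexp` is my own name for it):

```python
import math

def commexp(a, b):
    """'Commutative exponentiation': e^(ln(a) * ln(b)), symmetric in a and b."""
    return math.exp(math.log(a) * math.log(b))

print(commexp(2, 3), commexp(3, 2))  # equal by construction
print(commexp(math.e, 5))            # e acts as the identity: returns 5.0
```

A nice consequence visible in the code: since ln(e) = 1, the number e plays the role of the identity element for this operation.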
In exponentiation, a positive power denotes repeated multiplication and a negative power denotes repeated division. Division is the opposite of multiplication, just as negative is the opposite of positive; so in tetration, a positive height denotes repeated exponentiation and a negative height denotes the logarithm (log).
That was super interesting, the idea of going back to nested functions' properties to grasp T(e,1/2) is great! I was stuck thinking that f(x) = T(x, 1/2) was such that T(f(x),2) = x (which makes f(x) = ln(x)/W(ln(x))), but your way seems more convincing 😂 the only thing left is… to find an expression of T(e,x), or really T(x,y) for x,y > 0… I have an idea of something silly involving partial derivatives where the function (T(e,y))^x would arise, tell me if you're interested to see it x')
well @@lunaticluna9071, have you heard about fractional derivatives (if not, just check Wikipedia or YouTube)? My idea is simple: instead of taking the α-th derivative for a given α, it could be interesting to take the partial derivative of (f^(α))(x) with respect to α, making it a sort of "2nd-order derivative" of f. But then we could differentiate this with respect to α a second time, or any integer number of times, or even any real number of times using fractional derivatives again. Then, if we differentiated β times with respect to α, we can now differentiate with respect to β to get the 3rd-order derivative, and so on and so forth… And here we face something reminiscent of hyperoperations but applied to functions (let's say analytic functions at first, easier to deal with). The funny thing is that for any n, the function whose n-th-order derivative is itself (and such that f(0)=1) seems to be (T(e,n))^x (x being a simple exponent)… But now, let's say we managed to find an easy way to deal with n-th-order derivatives for any n, there might be a way to get the α-th-order derivative (again, α a positive real number). Ik I'm going kinda fast here, but logically we should find that the function whose α-th-order derivative is itself would be (T(e,α))^x, but yeah, hard to dive into the details in a youtube comment x')
I kept getting confused as to how 2 tetrated to 3 = 16 and how 2 tetrated to 4 = 65536, but after watching this I gained the proper knowledge of how to perform the operation: by simply going down the tower of power. Say I start with 2^2 (the very top of the tower), which is 4; then it goes down to the next 2, making it 2^4, which is 16; then, since no part of the tower remains, it's just the original value, 2^16, which = 65536. God I love when I finally understand math :D
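The top-down evaluation described in this comment is a small loop in code (Python sketch; `tetrate` is my own name):

```python
def tetrate(b, n):
    """Evaluate b^^n by collapsing the power tower from the top down:
    start with an empty tower (value 1) and repeatedly exponentiate."""
    result = 1
    for _ in range(n):
        result = b ** result
    return result

print(tetrate(2, 3))  # 2^(2^2) = 16
print(tetrate(2, 4))  # 2^(2^(2^2)) = 65536
```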
5:22 I haven’t watched the video Ye this but I already know a short way. I start with the example x tetrated by 3, this is he same as x to the x to the x. In general x tetrated by n is x to the x n/2 times. now I’ll take the natural log of x tetrated to 3, this is the same as the natural log of x to the x to the x. We multiply the powers to get the natural log of x to the x times x, which is the same as the natural log of x to the x squared. Now a property of logarithms is that the log (including natural log) of x to some power is the number in the exponent section times the log of x. So we fix our equation as x squared times the natural log of x. In our equation we can now say x tetrated to n is the same as x to the n-1 time the natural log of x. And it’s reasonable (and true) to assume this works for all numbers. Therefore we just take e to the power of this formula and we have an equation for tetration.
This video was amazing! I've looked into this same topic a while back and I wish I could have found a method as creative as yours. Although I have one question, where did you find the value for e^^1/2 that you showed at 21:46 ? I remember scouring through the internet for stuff about tetration and I've seen no mention of that.
en.citizendium.org/wiki/tetration#Taylor_expansion_at_zero lists the coefficients of a Taylor series centered at 0 for the xth tetration of e. I used the Taylor series to get a value for the 1/2 tetration of e up to enough precision where it disagreed with my approximation. I've listed the link in the comments now!
Thanks for the video, it was very enjoyable to watch! I'm a bit confused about the explanation at 21:03. The 3 equations are the same as what I got when expanding; however, using Wolfram I got the answers (a≈0.480784, b≈1.48769, c≈-0.848109) or (a≈0.496487, b≈0.845326, c≈0.340039). But my main point was that none of these solutions look the same if you plot f(f(x)) against the original polynomial, which I doubt is a problem that arises from a lack of arithmetic precision. Could it be from the fact that you assume that f(x) is a 2nd-order polynomial instead of, say, a polynomial with fractional powers? Additionally, how come you (seemingly) brush off the coefficients of higher powers? To me, it seems like they'd serve as useful additional restrictions if they're set equal to 0. Admittedly I don't know much about how functional square roots are calculated, so is this a standard technique when solving for them? Of course, I plugged the SOE with more restrictions into Wolfram as well, which claimed there were no solutions
Thanks to your comment, I just noticed there is a typo in that slide! You'll see that the line at the top has b^2*c, while the line underneath (which I'm assuming you used) has b*c^2. The top line is correct, and you should get the solutions given in the video. However, the systems of equations do (mostly) have multiple solutions. For these, I picked the solutions that had coefficients closest to 0 and whose coefficients were similar to those of previous degrees (yes, I know it's extremely informal!) I'm not certain what the other solutions could represent or whether they give the same value for the 1/2 tetration of e - another fascinating extension of this problem I didn't have time to include! The main reason I discarded the higher-power coefficients was because, in a sense, not enough information about those degrees is known. I tested out these systems of equations as well, and part of the reason could be that introducing these restrictions gives 5 equations for 3 unknowns, essentially overconstraining the solution possibilities. The fact of the matter is that the coefficients of the Taylor series for e^x are NOT equal to 0, but they're not specified in the approximation, so I treat these higher coefficients similarly to how information is thrown away when calculating with significant digits. Thanks for the great comment!
@@Tetrolith-ko5yu thanks for your reply! I just checked the calculations with the correct equations, and indeed got the same answer as you did. And thank you for clarifying your process for eliminating the higher-order coefficients! However, I'm still a little confused about how "bad" of an approximation f(f(x)) is for p(x)=1+x+1/2x^2. I know that you talked about how it compares to e^x, but intuitively it feels like f(f(x)) should at least be able to approximate p(x) quite well, whereas it only does that near 0. Perhaps there could be a non-polynomial solution that works as a better approximation? Maybe I'm missing something obvious
Nice video, I was hoping to find a generalised function that could give us the irrational tetration of any number at all, say the 1/2 tetration of x. Care to share links where I could find such an identity?
@@ILSCDF It's a unary operator, usually called the successor S(x) (see Peano axioms). But in the hyperoperations family, for consistency, it is usually defined as H_0(a, b) = b + 1, where the first operand "a" is ignored.
@@ILSCDF I'd consider it a degenerate case of a binary function at most. The Wikipedia article on hyperoperations seems to agree: "Note that for n = 0, the binary operation essentially reduces to a unary operation (successor function) by ignoring the first argument."
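The convention described in these replies gives the usual recursive definition of the hyperoperation family, which can be sketched like this (a Python sketch; the base cases follow the standard definition, with H_0 ignoring its first argument):

```python
def H(n, a, b):
    """Hyperoperations: H_0 succession, H_1 addition, H_2 multiplication,
    H_3 exponentiation, H_4 tetration."""
    if n == 0:
        return b + 1                                # successor ignores a
    if b == 0:
        return a if n == 1 else (0 if n == 2 else 1)  # base cases per level
    return H(n - 1, a, H(n, a, b - 1))

print([H(n, 2, 3) for n in range(5)])  # [4, 5, 6, 8, 16]
```

Note how each level is defined purely by iterating the level below it, which is exactly why H_0 must be unary in spirit even though it is written with two arguments.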
Why do you say that 2 X 2 X 2 involves three operations instead of just two? Clearly the OPERATION involved , [i.e multiplication], has only been applied twice. And the notation 2^2 = 4 says, take 2 times itself TWO, not three, times.
Tetration is not repeated exponentiation as it is usually expressed, at least not in the classical sense. Repeated exponentiation can be viewed in 2 ways: one of them is repeated exponentiation at the base, which would look something like ((2^2)^2)^2... N times = 2^(2^N), and the other is repeated exponentiation in the exponents, which looks like standard tetration. Because in the operations of lower order (multiplication, addition) the order of the terms doesn't affect the result, this distinction between repeating an operation at the "base" or at the other terms gives you the same result, meaning that their iterations (the operation of higher order) can be expressed as a single operation. This breaks down for operations above exponentiation: you cannot define the iteration of exponentiation (tetration) as a single operation for this reason.
Trips me up a bit that iterated composition (composition as in (f;g)(x) = g(f(x)) ) of a function with itself is shown here with a left subscript when I'm used to seeing it written for example as (f;f;...;f)(x) = f(f(...f(x))) = f^n(x). Otherwise this is a fantastic bit on the subject of power towers!
Why is tetration 2^(2^2) and not (2^2)^2 ? The latter would make more sense to me in the sense of hyperoperations, any insights would be helpful. Thanks!
They are two different operations. 2^(2^2) is right tetration while (2^2)^2 is left tetration. Standard tetration was defined to be made on the right because of exponentiation notation 2^2^2 = 2^(2^2).
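The difference between the two associations is easy to demonstrate numerically (a Python sketch; the function names are my own):

```python
def right_tet(b, n):
    """Standard (right-associated) tetration: b^(b^(...^b)) with n copies of b."""
    result = b
    for _ in range(n - 1):
        result = b ** result
    return result

def left_tet(b, n):
    """Left-associated tower: ((b^b)^b)^... — collapses to b^(b^(n-1))."""
    result = b
    for _ in range(n - 1):
        result = result ** b
    return result

print(right_tet(2, 4))  # 2^(2^(2^2)) = 65536
print(left_tet(2, 4))   # ((2^2)^2)^2  = 256
```

The left tower collapses to a single exponentiation, b^(b^(n-1)), which is why right association is the interesting (and standard) choice.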
Others have suggested chasing down *irrational* tetrations ... Or checking out *repeated* tetrations ... but how about trying to establish if *complex* tetrations (involving roots of negative numbers) are possible ... I don't mean actually doing them: just see if: they can be proven to be possible or: they can be proven to be impossible or: do both of these proofs occur simultaneously or: are they impossible to prove with our current understanding of Maths.
If you accept that a Taylor series that produces arbitrarily accurate values is a good enough solution, then you can plug in complex numbers, square matrices, and so on into the Taylor series.
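As a quick illustration of that point, a truncated Taylor series for e^z happily accepts a complex argument (a Python sketch; the same loop works unchanged for square matrices if `**`/`*` are replaced with matrix operations):

```python
import cmath

def taylor_exp(z, terms=40):
    """Partial sum of the Taylor series of e^z: sum of z^k / k!."""
    total, term = 0, 1
    for k in range(terms):
        total += term
        term = term * z / (k + 1)   # build z^k/k! incrementally
    return total

z = 1j
print(taylor_exp(z))   # matches cos(1) + i*sin(1)
print(cmath.exp(z))
```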
We could generalise tetration to rational numbers. But how about generalisation of errr... Well, if f(a,b,1)=a+b, f(a,b,2)= a*b, f(a,b,3)=a^b, then f(2,3,3.5)=?
You probably know this, but x↑↑(1/a) is the inverse function of x↑↑a, or x↑↑(1/2) is the inverse of x↑↑2, which is x^x, which we know the inverse of: e^LambertW(ln(x))
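The closed form mentioned here (the inverse of x^x, sometimes called the super square root) can be checked numerically; a sketch assuming SciPy is available:

```python
import math
from scipy.special import lambertw

def ssrt(y):
    """Inverse of x^x for y > 1: e^(W(ln y)), using the principal branch of Lambert W."""
    return math.exp(lambertw(math.log(y)).real)

x = ssrt(27.0)
print(x)       # 3.0, since 3^3 = 27
print(x ** x)  # recovers 27
```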
I haven’t seen this particular style of notation using a leading subscript, but there are a few other ways I have seen, such as something like f²(x) = f(f(x)), which matches the inverse function notation f^(-1)(f(x)) = x. Sometimes the superscript is preceded by the ring operator used for function composition (f∘g).
I just remembered, I’ve also seen a version of iterated composition that was like the sum Σ or product Π operators. It used the Cyrillic letter И which I think stood for the Russian word for “iteration”. It looked like И_{x→5}^{3}(2x+1) = 2(2(2(5)+1)+1)+1 = 47. Try pasting that into a LaTeX viewer if you want the full experience.
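That iteration operator is straightforward to mimic in code (a Python sketch; `iterate` is my own name for it):

```python
def iterate(f, x0, n):
    """n-fold composition f(f(...f(x0)...)), like the И operator described above."""
    x = x0
    for _ in range(n):
        x = f(x)
    return x

# И_{x→5}^{3}(2x+1): apply x ↦ 2x+1 three times starting from 5
print(iterate(lambda x: 2 * x + 1, 5, 3))  # 47
```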
Knowledge? 10
Explanation? 10
Graphics? 10
Calming zen music? 100
👏👏👏
This is a stock piece I’ve heard on other channels before and it always gets me in the mood to do some math lol
@@redpepper74 Medication -MoPoMedi
Wonderful video! But could you generalize it to irrational values using limits like what you can do for exponentiation?
If you want to do tetration like shown in the video you can do it for all rationals (because theoretically you know your approach for any a tetrated to the 1/n, and then you tetrate your result again). But actually taking limits instead of just approximating them with that approach is not gonna be possible I think. And before we get ahead of ourselves we have to ensure continuity before taking a limit in the first place. Also you might find this way of approaching tetration having some undesirable properties or missing some that you wanted it to have (like differentiation).
@@hexaV_ Agreed. I've tried thinking about how and it makes me tired.
Interestingly, the most natural derivation of tetration is using ordinals, objects in formal logic.
I would describe them as the unique numbers with the property that for every set of ordinals, there is a smallest ordinal above every ordinal in this set.
*You aren't allowed to make a set of all ordinals.
This is enough to play with for hours.
You are allowed to make a proper class of ordinal numbers. Classes are a more natural and fundamental type of mathematical object than sets.
Great video! Half iterates and the like (and the tricks to compute them) have always fascinated me, and this video scratches that itch nicely. Thanks for making it!
No one knows this is another channel.
Two of my favorite YouTubers
herro
Nice result, but I feel like in the middle you just introduced an unnecessary amount of notation that got you nowhere.
True, but this is also how he found it easiest which would probably mean that it could be made simpler
Yes, this process could definitely be made more efficient (as I said at the end)! This is the problem solving route that I took when initially attempting this problem for myself, so there are definitely some places it could be made more streamlined.
yea like the nested function notation already exists and looks quite familiar, especially:
fⁿ(fᵐ(x))=fⁿ⁺ᵐ(x)
or for an even more familiar form:
fⁿ ○ fᵐ = fⁿ⁺ᵐ
I found the notation necessary
@@user-pr6ed3ri2k Exactly, and this notation is also used in the video. He spends a significant part of the video first inventing and introducing his own notation, then using it to explain the recursive part, and then introducing an equivalent notation and proving they're equivalent.
@ OP: I think it would've made the video better and much easier to understand, if you had only introduced the usual notation and used that to explain the recursion, instead of taking this detour, that doesn't add anything but complexity. Even if that was the way you yourself got to the solution, it doesn't hurt to streamline the path a bit for the video.
To me the most interesting things about tetration is the possibility of creating an inverse function. After all, almost every new type of number was created as a result of an inverse operation derived from simple unary operations. Assume we only have the number 1. With the unary operation NOT, you can find 0. Now with these two numbers alone it opens up the ability to identify truth or falsities. Addition is simply repeated incrementation, another unary operation, and the inverse of addition is subtraction. Now that you have the ability to unlock all whole numbers, you can count things! But subtraction brings up the question- what happens when you subtract more than you have? Well, you get the negative numbers! Okay, so we can count debt of whole numbered things now. But what if we want to repeat addition multiple times to compress on-paper work? Well, you get multiplication, with its corresponding inverse, division. Now you can represent numbers which are only parts of a whole object, the rationals! And repeated multiplication? Exponentiation, along with roots (which are still really just exponentiation) and logs! Thanks to being able to place our earlier-discovered numbers into the operands of roots and logs, we find two new things: irrationals and complex numbers! But this brings up the question- if all of these numbers are simply the result of extending the input domain of inverses of repeated incrementation, then could there be a new type of number originating from the inverse of tetration? Perhaps transcendentals become possible to evaluate naturally? Maybe a natural implementation of quaternions without just assuming that such an extension already exists? Or perhaps something entirely different, that we can't even begin to understand the purpose of until we discover it?
The problem with your analysis is that it is just inaccurate. The complex numbers and the real numbers are not mathematical structures defined in terms of "inverse operations" applied to other mathematical structures. Mathematical structures, in reality, are defined in terms of axioms they satisfy.
@@angelmendez-rivera351 I'm not necessarily saying they're "defined" in terms of these inverse functions, but rather that they can be used to extend the domain and range of their corresponding functions. If it were absolutely the fact that complex numbers did ot exist, then Y = the square root of X would not exist in the domain X
@@GameJam230 *I'm not saying they're "defined" in terms of these inverse functions...*
Uh, no, this definitely is what you said. I know how to read. I quote: "After all, almost every new type of number was created as a result of an inverse operation derived from simple unary operations."
*Assume we only have the number 1. With the unary operation NOT, you can find 0.*
No, this is not how the NOT operation works. The NOT operation is an operation in Boolean logic, it does not apply to natural numbers, much less any extensions thereof.
*Now, with these two numbers alone, it opens up the ability to identify truth or falsities.*
Sure, you can map TRUE to 1 and FALSE to 0, but this is not the system of natural numbers, nor can you construct the system of natural numbers from this.
*Addition is simply repeated incrementation, another unary operation,...*
No, this is factually incorrect. In Boolean logic, addition is defined by 1 + 1 = 0 and 0 + P = P + 0 = P. There is no incrementation operation in Boolean logic.
*Now that you have the ability to unlock all whole numbers, you can count things!*
The only numbers you can unlock in Boolean logic are 0, 1, and these are not natural numbers, but truth-values. You cannot construct the natural numbers from this structure alone. This is why, in set theory, the set of natural numbers is said to exist axiomatically. In abstract algebra, the natural numbers are, again, defined axiomatically. They are never defined in terms of Boolean logic. Also, if you want to count, then you need the class of cardinal numbers, not just the set of natural numbers.
*But subtraction brings up the question- what happens when you subtract more than you have? Well, you get the negative numbers!*
See, you are doing the thing you said you were not doing: claiming that the various number system extensions are created via inverse operations. The integers are actually defined in the context of group theory, not in terms of subtraction. The same applies for all other number systems. They are defined by axiomatizing mathematical structures. The fact you can define new operations on extensions of previous structures is purely a coincidence (since it does not always work).
*But this brings up the question- if all of these numbers are simply the result of extending the input domain of inverses of repeated incrementation,...*
See? You did it again.
*...then could there be a new type of number originating from the inverse of tetration?*
It is conceivable that there may be some axiomatizable mathematical structures which can be axiomatized to be extensions of current structures we use, AND such that tetration, when formally axiomatized, happens to be both surjective and injective in them. However, it being conceivable does not mean it is possible. The research in abstract algebra with regards to tetration is very minimal, but what little does exist, it does not support the idea that an extension of these mathematical structures with the desired properties exists.
@@angelmendez-rivera351 Okay, yes, I phrased a few things poorly to how I actually meant them, but I'd like to point out how one of your first points was complaining that I was using boolean operations in the natural numbers, and then the next 3 points were complaining that (and how) I was using arithmetic operations in the boolean system, which is just contradictory. Did you ever consider that we can talk about MORE than one system of logic at the same time, and that maybe, JUST MAYBE, they are all in some large way connected?
Point is, I just came here to explain a neat thought I had, and you decided it wasn't good enough, so I tried to explain what I actually meant, and your first thought is "Yup, I should pick apart absolutely every quote I can from this man's speech and completely shit on anything he's ever dared to think of".
People like you are the reason why I will never enjoy math enough to actually consider doing it in school- because there's no room anywhere to merely SUGGEST AN IDEA or ASK A THOUGHT-PROVOKING QUESTION unless every word of it means EXACTLY what it says, instead of the listener being able to apply a little common fucking sense and consider what the speaker actually MEANS.
That has been a mind-boggling question of mine for two years now, and I've come up with two inverses of tetration: the tetration-root and the tetration-logarithm, the functions that give the base or the height respectively. I've found no extensions beyond natural numbers so far
Great video! You can see a lot of effort was put into it, I hope to see more in the future!
That's crazy, I thought generalizing tetration was impossible!
Great video!
I really hope that all the mathematicians agree on expanding this marvellous monster operation and get inspiration from this video! Congrats on this great video! 👏👏👏
The dream I've always had is for there to be some way to generalize hyperoperations fully, ideally to the complex plane: imagine, X-ation(Y, Z)
Ong a verified user
There is; there's a plot on Wikipedia, but I haven't been able to find the specifications
I love that a bunch of new math channels arise from SoMEs. Yours is one I'll follow.
Thinking about repeated exponentiation started the chain of events which eventually led to my mathematics obsession, and I'm now in my last semester of undergrad in maths! Cool to see a video on this topic
Anyone that writes math code in C++ instead of Python deserves kudos and a sub from me!
C++ and FreePascal (which I use) are the best. By the way, it's very satisfying to implement your own math library from scratch. I had to do one when I realized that the complex numbers implementation was awful in FP, so I implemented them myself, along with all the math stuff up to the Lambert W function. It was fun and I learned a lot. And I admire it when someone does their math work through programming and doesn't use things like Wolfram Mathematica.
Might mention that addition, the level-one operation, can be considered iterated counting, or adding 1 repeatedly: 2+2 is 2, 3, 4. This makes counting the level-zero operation, which is an interesting parallel to exponents of zero returning 1.
this is honestly one of the best some3 videos so far
I like this video a lot, and I think this channel will become a great math channel. I wish you the best and I'll be waiting for more content like this. Keep it up!!
Can the 'hyperoperations' themselves be generalised? I.e. if addition is the first hyperoperation and multiplication the second, is there any meaning to a 1.5th hyperoperation?
Why don't you take a look into the question and tell us what you find. 😛
Seriously, I wonder the same thing.
If you define h(n, a, b) := the nth operation applied to a and b, so that for example h(2, 3, 4) = 3 * 4 = 12 and h(4, 2, 3) = 2 ↑↑ 3 = 16, then can you somehow evaluate h(1.5, 2, 3)?
From what I've gathered, h(n, 2, 2) should be 4 for all values of n; that might be a useful clue.
Come to think of it, you might be able to use a technique similar to @tetrolith to get an approximate value.
I found a blogspot post by user paurea called “Supersum? Subproduct?” discussing this exact problem (operation 1.5), and they used Lie algebra stuff or something (didn’t understand that part lol)
h(1.5, 2, 3)=5.65
2+3=5
3*2=6
I tried solving the equations at 20:25 analytically. Yeah... not gonna happen. However, after solving these numerically, _I got different results_ from what you have:
a≈0.480784, b≈1.48769, c≈-0.848109, giving a+b+c = 1.12037
a≈0.496487, b≈0.845326, c≈0.340039, giving a+b+c = 1.68185
There are six additional complex-valued triplets.
I did plug these triplets back into the original three functions and verified they do indeed work. Therefore, I don't know where the problem is. It could be the equations were transcribed incorrectly, but the following looks right:
a+ab+(a^2)c=1, b^2+2abc=1, bc+2a(c^2)+b(c^2)=1/2
I don't have the time to redo your derivation today, but I could look into it later this week if enough people want me to.
i want you to
DO IT! DO IT! DO IT!
Yeah, turns out you have to solve septic (degree 7) equations...
I tried plugging in your values myself and I found your problem. Your a, b, and c satisfy bc + 2ac² + bc² instead of bc + 2ac² + b²c (note the last term!) Maybe you could retry your calculations haha
@@redpepper74 THANK YOU. This was driving me insane. I'll poke at it again when I have time.
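For anyone who wants to reproduce this thread's calculation, here is a sketch using SciPy's `fsolve` on the corrected system (the initial guess is my own choice; other starting points may land on a different real or complex solution):

```python
import numpy as np
from scipy.optimize import fsolve

# f(x) = a + b*x + c*x^2; match f(f(x)) to 1 + x + x^2/2 through degree 2
def equations(v):
    a, b, c = v
    return [
        a + a * b + a * a * c - 1,                 # constant term
        b * b + 2 * a * b * c - 1,                 # x term
        b * c + b * b * c + 2 * a * c * c - 0.5,   # x^2 term (note b^2*c, not b*c^2)
    ]

a, b, c = fsolve(equations, [0.5, 0.9, 0.3])
print(a, b, c)
print(a + b + c)  # f(1), the quadratic approximation of e^^(1/2)
```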
The third equation onscreen at 14:32 can be derived from the top and bottom equations. The other three equations are the three axioms of function iteration.
The original presentation is more transparent.
It would be a great undertaking to find the set of Taylor coefficients as analytic functions of n in R.
Man, this is so funny. I spent months working on the same problem and we both took the same approach with truncating the Taylor series and using software to calculate the coefficients XD. The best my software could do was an 18th order approximation, but I soon realized that there's actually multiple solutions for the coefficients, which gave me doubts (in fact, there is a continuum of solutions for the full series expansion). I'm sure you're aware of the many tetration forums which use more advanced methods (that go way over my head), but fascinatingly, no one appears to know what the "correct" analytic solution is yet. I am amazed that this is still an active area of research. I will bow down to whoever can find a nice formula for tetration over non-integers, be it the coefficients of the taylor series, or even an integral like the gamma function. Thanks for bringing more awareness to this problem. Great video!
And if anyone wants to go down a rabbit hole like me, you'll find echoes of someone named "Kneser" who apparently beholds the (unproven) but widely-believed-to-be canonical solution for tetration.
One last note, if Kneser's solution is valid, Kouznetsov proved that it is the unique solution!
@@yamsox Do you have a source showing that Kneser's solution is unproven? Every source I can find claims otherwise
@@angelmendez-rivera351 Sorry, I probably am wrong about that. It was my understanding that his solution was proven to be the unique solution, if it is indeed a solution. Perhaps it was also proven that the solution is valid as well?
@@yamsox myweb.astate.edu/wpaulsen/tetration2.pdf
It was indeed proven
I wonder how unique the prescription f(f(x)) = e^x at x = 1 for e tetrated 1/2-times is. If I just take a simpler example, f(f(x)) = x doesn't give unique f(x), because some possible solutions are x, -x, 1/x, -1/x. But the two x, 1/x give the same value at x = 1. I am not sure whether f(f(x)) = e^x only implies a unique solution, or multiple solutions and whether those coincide at x = 1. More generally, what are the solutions for f(f(x)) = g(x) for a known g? I assume with more and more repeated tetration, the number of possible solutions might grow, so I think we're talking about some "principal solution" here, defined via the Taylor series.
I can imagine how to extend this to rational numbers, p/q: first, find 1/q from f(f(f(...f(x)...))) = e^x (left-hand side is nested q-times), then take the resulting function of x, f(x) and nest it p-times, f(f(...f(x)...)) (p-times) and plug in x = 1. Real numbers would probably work the same, find an approximation for r ~ p/q, call the result the approximation for e tetrated r-times. Question: how do we know that this (rather opaque and complicated process) results in a smooth function, i.e. if p1/q1 and p2/q2 are somewhat close, are the tetrated results somewhat close (in a continuous sense)? Only then it makes sense to extend it to the positive reals.
Finally, any idea on how to extend to complex numbers? e tetrated i times, anyone? ;))
We can't go further to negative-number tetrations? That was my doubt, brother. Also, you are an absolute genius, because you are the only one I've seen do it on this huge platform, bro. You deserve more support, thank you man!
Actually, there are 2 ways to tetrate with negative-number heights, using either inverse of exponentiation.
In one of the Ramanujan's Lost-Notebooks you would find a general efficient method to compute taylor expansion of iterates of f(x) = exp(x) - 1 (hence discovering Bell numbers long before Bell).
I'm not sure but I guess he was interested in generalizing those coefficients to non-integral values.
Can you give something like notebook and page number? I have done inefficient ways of fractional iteration of g(x) = ln(x + 1) (which you can then combine with integer steps of your f(x) to get fractional iterates of f). Btw in the C and Python math libraries, f = expm1 and g = log1p.
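The second iterate's expansion is easy to reproduce with a computer algebra system (a SymPy sketch). Its low-order coefficients come out as Bell numbers divided by factorials, matching the connection mentioned above (B_3 = 5 gives the 5/6 below):

```python
import sympy as sp

x = sp.symbols('x')
f = sp.exp(x) - 1      # fixes 0, so its iterates have clean Taylor series at 0
f2 = f.subs(x, f)      # second iterate: f(f(x)) = exp(exp(x) - 1) - 1

expansion = sp.series(f2, x, 0, 4).removeO()
print(expansion)       # x + x**2 + 5*x**3/6
```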
Wow- what a brilliant young man. This is the best math video I have ever heard, and those graphics - another level!!!!!
0:24 - 0:42 No, this would be incorrect. If you apply the operation 3 times, then you have 2 (+ 2) (+ 2) (+ 2) = 2 + 2 + 2 + 2 = 2•4. To put it differently, +(2, +(2, +(2, 2))) = •(2, 4). The 2 appears 3 times in the sum denoted by 2•3, but the + operation only appears 2 times. Also, the idea that multiplication is "repeated addition" is also just incorrect in general, and is only legitimate when working specifically with natural numbers. It does not hold when multiplying rational numbers (e.g., 2/5•3/7), real numbers (e.g., e•π), complex numbers (e.g., (2 + i)•(1 + 5i)), matrices, functions, or any other type of mathematical object. The actual definition of multiplication is that it is some binary operation which distributes over addition. In general, there actually are multiple such operations, so you need to specify which multiplication you are working with.
0:53 - 1:03 This is the same mistake as earlier. 2(•2)(•2)(•2) ≠ 2^3, but 2(•2)(•2) = 2^3. The operation occurs 2 times, not 3 times. The exponent tells you the number of "copies" or "occurrences" of the number being multiplied, not of the operation itself. This mistake could easily be avoided if you said instead "2•3 is 2 added to 0 exactly 3 times" and "2^3 is 2 multiplied to 1 exactly 3 times." In general, you define a function f[a](x) = a#x, where # is some arbitrary binary operation, so f[a](1) = a, and in general, (f[a]^n)(1) = a%n, where % is a new binary operation, the "repeated version of #", where f^n is the nth iterate of f. Notice, though, that this is only well-defined for natural numbers n.
2:20 - 2:28 This is the same mistake again. The correct definition is given by defining g(m) = 2^m, and then saying 2^^3 = g(g(g(1))). Notice how g is applied 3 times.
4:06 - 4:16 This is not the commutative property, because this property, while true, is not named "the commutative property." For a given binary operation #, the commutative property for said operation states that x#y = y#x. If # = ^, then the commutative property says that x^y = y^x. This is clearly untrue: 2^3 = 8, but 3^2 = 9.
4:38 - 4:40 This is also incorrect, as exponentiation does not distribute over multiplication. Yes, (x•y)^n = (x^n)•(y^n), but one does not have x^(m•n) = (x^m)•(x^n). Instead, one has x^(m + n) = (x^m)•(x^n). Therefore, ^ does not distribute over •.
6:08 - 6:10 Well, hold on. x^0 ≠ x/x in general. You are assuming x^(m - n) = x^m/x^n prior to proving it. If you want to solve x = x•x^0 for x^0, in the assumption that the equation holds for all x (which is a requirement), then note that this therefore means x^0 = 1, solely because x = x•1 for all x. Therefore, x^0 = 1 holds true, even for those x for which x/x is not well-defined. This is important when x is, for example, a function or a matrix.
6:10 - 6:32 A more natural way of having handled this would have been to prove x^(m + n) = (x^m)•(x^n) via the recursion, and then insisting that m, n should be allowed to be arbitrary integers, and not merely natural numbers.
8:17 - 8:21 This rule cannot work unless you restrict x to be a real number, and specifically, x >= 0. I suppose you were going to do this restriction anyway, but noting this is important: the extension you are about to attempt can never possibly work for arbitrary x.
9:36 - 9:39 Again, calling it the commutative property is incorrect, and in fact, misleading.
12:03 - 12:24 This was unnecessarily complicated. All you need to do is realize that applying a unary function 0 times is the same as doing nothing, which is the same as applying the identity function. This is more natural, more intuitive, and is not prone to the erroneous recursion solving you did earlier in the video, and which you almost did again here.
13:23 - 13:29 The criticisms to the previous section apply here as well.
14:03 - 14:11 And this is the reason I criticized your approach earlier. Your argument relies on the function A being invertible here, yet the 0th iterate of a function is always the identity function, regardless of whether it is invertible or not. Only the negative iterates should actually depend on the existence of an inverse.
15:47 - 15:55 Right, so the problem here is that ln(-1) and ln(-2) are not well-defined (some people would say they are multivalued, but that is just a fancier way of saying "not well-defined"). When it comes to defining the iterates of a function, the domain and codomain (and range) are very much relevant. I mean, to begin with, function composition is only well-defined when you take the domain and codomain (and range) into account. In this case, the exponential function exp has domain R, but its range is (0, ∞), not R. Therefore, you will run into problems if you are not careful about this. In general, exp^n has range (e^^(n - 2), ∞) when n > 0 (e.g., exp^1 has range (e^^(-1), ∞) = (0, ∞), and exp^2 has range (e^^0, ∞) = (1, ∞)). The range of the identity function is R, but the inverse function of exp, ln, does not have R as its domain, but rather (0, ∞) as its domain, and R as its range. Further negative iterates have the domain even more restricted: (e^^(n - 2), ∞) is the domain of ln^n = exp^(-n). So, in your column for n = -1, everything above x = 1 should be empty.
16:08 - 16:12 No, they are not well-defined.
16:12 - 16:19 If you are going to avoid the topic, then I would recommend that you avoid making a factually incorrect statement on said topic, even if it is meant to make it "easier" to understand. It would be even easier to understand if you had simply left those slots on the table empty.
19:49 - 20:00 This is problematic. You are assuming that, by replacing exp with its second-degree Maclaurin polynomial approximation, and by insisting that f also be a second-degree polynomial, this polynomial will indeed be a suitable approximation for the exact solution to the equation f°f = exp. However, this methodology fails in general, so it is important that you prove it works in this specific case. In fact, this is probably the most important section of your entire video, yet you just skipped it entirely and took for granted that it works. I know you are doing this for illustration purposes, but your video has not made it clear that what you are doing would normally require justification at all, nor does it clarify that you are indeed making an assumption for the sake of simplicity. As such, there are going to be plenty of viewers who will watch this part of the video, take whatever the result here is, and accept it as irrefutable fact.
The greatest mistake here, though, is that you are assuming that f°f = exp is an equation with a unique solution, but this is not the case: the set of all solutions to this equation is uncountably infinite. As such, exp^(1/2)(1) is just not well-defined at all. You mentioned the existence of an "accepted" value for e^^(1/2) (which is debatable), but this accepted value definitely does not come from solving this equation over the real numbers, so the fact that you neglected to mention this is a huge problem.
----------
Overall, I can see you put a lot of effort into the video. The visual presentation was simple but effective, and I appreciate that you kept the video to the point without going off-topic unnecessarily. That being said, the quality of your videos would improve drastically if you avoided using mathematical terminology incorrectly, as that achieves nothing except misinforming people, even if that is not your intention. Also, I think you need to make the purpose of what you are bringing up at any given time in the video clearer, so that assumptions made for the sake of simplicity are not treated by viewers as facts (which is a problem I have observed in the comments section).
Keep it up!
Thanks for the feedback! There is definitely lots of room for improvement. Most of the things you pointed out were done in order to make the video more accessible (for example, "defining" multiplication as repeated addition, as this concept was helpful later on when looking at repeated operations). There are a ton of formalities I totally skipped over for time and to maintain the flow of the video. My erroneous use of the term "commutative property" is inexcusable, though!
Again, thanks for the valuable feedback!
I think % is modulo
@@Tetrolith-ko5yu If it interests you, I found (thanks to someone) an exact formula (a limit) to calculate tetration for much larger values!
It even works in the complex plane (complex heights, in some cases).
Technically, it has already been found (Schröder and Koenigs), but if you're interested, let me know!
1:11 Can we also do this backwards? i.e. Express addition itself as doing something else repeatedly?
That's called zeration, or the hyper-0 operator.
@@returndislikes6906 So.............basically doing nothing? 😅
@@feynstein1004 its logical operator. it is similar to max function
@@returndislikes6906 Hmm I don't understand
Over the past few months I have been investigating Graham's number, and I wonder if there is a way to generalise Knuth's up-arrow notation to non-integers. I wonder if this method could be generalised to a↑↑↑b where a and b are non-integers.
I mean, we don’t even know if pi^^4 (that is, π^π^π^π) is an integer or not, so extending pentation to the reals is almost certainly not something we can do either.
IVE HEARD OF THIS BEFORE BUT THEN FORGOT WHAT IT WAS THANK YOU SOOOOOOOOO MUCH FOR REMINDING ME
I wrote an entire paper about "nth compositional roots of functions" which was your question on f(f(x))=e^x. In the paper, I proved under what conditions a one-to-one function has a nth compositional root. Or, in your terminology, given some g(x), when does f(x) exist such that n nested functions of f(x) = g(x).
Nice video! This was a great way to teach tetration and its generalization. Hope you upload more later.
I take issue with the way you showed there is no distributive property. 4:43
when talking about the distributive property of multiplication, we use addition. 4(x+y)=4(x)+4(y)
when talking about the distributive property of exponentiation, we use multiplication. (xy)^2=(x^2)(y^2)
so why are you using multiplication when talking about the distributive property of tetration? shouldn't you instead be using exponentiation?
I would like to know if the distributive property does exist or not in this way, but I guess I can look that up in my own time.
Great!! Next is ... pentation?
Yep
recursion
@@DeadJDona nested recursion?
2↑↑↑3=65536
3↑↑↑3=3↑↑(3↑↑3)=3↑↑7625597484987 (vastly larger than 10^12)
4↑↑↑3=ERR
@@stickman_lore_official6928
2^^^3 = 65536
3^^^3 = 3^^(3^^3)
4^^^3 = 4^^(4^^4) = ERROR
What about tetration with complex numbers on the power?
Madman
Approximate a bunch of values with close inputs (ex. b tetrated by 0.01, b tetrated by 0.02, ... b tetrated by 0.1), and use those values to approximate the nth derivatives of ˣb at x = 0. Then use THOSE values to approximate the coefficients of the maclaurin series for ˣb, and plug in any complex number you'd like. Theoretically speaking it should work, though it would be extremely inefficient to try, and would take a while just to get the first digits right.
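To make the scheme above concrete we'd need an oracle for ˣb, which we don't have, so here is a sketch of just the machinery (my own illustration) applied to exp instead: estimate derivatives at 0 with central finite differences, then evaluate the resulting Maclaurin polynomial at a complex point.

```python
import cmath
import math

def maclaurin_coeffs(f, degree, h=0.01):
    # k-th derivative at 0 estimated with a k-th central difference,
    # then divided by k! to get the Maclaurin coefficient
    coeffs = []
    for k in range(degree + 1):
        d = sum((-1) ** j * math.comb(k, j) * f((k / 2 - j) * h)
                for j in range(k + 1))
        coeffs.append(d / h ** k / math.factorial(k))
    return coeffs

def poly(coeffs, z):
    return sum(c * z ** k for k, c in enumerate(coeffs))

cs = maclaurin_coeffs(math.exp, 6)
approx = poly(cs, 1j)              # evaluate at a complex point
print(approx, cmath.exp(1j))       # the two should be close
```

As the comment says, doing the same with sampled values of ˣb would be numerically fragile: finite differences amplify noise in the samples, and the truncated series only converges near 0.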
This is amazing! I first learned about tetrations last year. I never knew tetrations can go that far! Maybe there could be a possible operation of repeated tetrations that can go beyond our knowledge!
yup, there is! pentation :)
What if we can go beyond pentation? Hyperpentation?
@@clueless3773 Hexation
You can just go as many nth-ation you want
@@clueless3773 Late to the party, but next comes hexation, heptation, octation... etc.
11:00 It's repeated function composition. Another notation would be $(x + 1)\overset{3}\circ x$, with $\circ$ being the composition operator, read as "passing $x$ through the $(x + 1)$ function $3$ times".
With your polynomial approximation, it seems that it is most accurate around x=0,
since f(f(1)) = exp(1)
that also means f(f(0)) = 1
and f(0) is e tetrated -1/2 times
So exp(f(0)) is e tetrated 1/2 times
In your 2nd degree polynomial, f(0) is 0.4979, which gives exp(0.4979) = 1.64526
Which is already a lot closer to the actual result.
You are so smart. I'm convinced that this is very important in math.
Is using 3 parameters to the "exp" function (a superscript, a subscript, and a regular parameter) standard? I didn't know what this meant and the internet didn't help.
Also, is there any way that non-integer hyperoperators (eg the 1.5th hyperoperator between addition and multiplication) make sense? :)
I'm no slouch on reasonable pure mathematics, but I can't claim to be an explorer of the outer edges. So I have a question.
Aside from perhaps a few instances of simplified notation, are there any applied uses for tetration? Or theoretically, quadration, pentration, sextration, etc? Not that I'm dismissing the field - there is validity in considering the nature of iteration of iteration. I'm just curious to know.
I believe that the only true uses of tetration are to serve as a benchmark in fractional iteration, and to expand human curiosity.
Hmm this makes me wonder. Is there an analog for e in the case of multiplication or addition, tetration too, and so on?
Finally, i have been looking for something like this
Loved that video. Now I'm thinking instead of 1/2, what about 1/3 and 2/3, and p/q? And what about 1/e or 1/pi? Could we extend to complex? Like i?
According to this logic 1/3 tetration of e would mean we need to find f(x) such that f(f(f(x))) = e^x and then do f(x), and 2/3 would mean we do f(f(x)). 1/pi and 1/e tetration and anything like that are the limits of approaching those numbers from rational numbers (a lot like taking something to the power of an irrational number). Unfortunately using the Taylor Series method in the video to find even a quadratic approximation of f(x) such that f(f(f(x))) = e^x gives formulas that are too complicated for any website I found to be able to solve.
News: after tinkering around in Desmos, I found an approximation for a quadratic formula that, when iterated three times as f(f(f(x))), is VERY close to e^x (at least for small values). Here it is: 0.2x^2+0.81x+0.4
This makes e^^1/3 approximately equal to 1.41 and e^^2/3 approximately equal to 1.94
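The claimed quadratic is easy to sanity-check (0.2x² + 0.81x + 0.4 is the commenter's numerical approximation, not an exact solution):

```python
import math

# candidate cube "compositional root" of e^x, from the comment above
f = lambda x: 0.2 * x**2 + 0.81 * x + 0.4

print(f(1))                  # estimate of e^^(1/3): 1.41
print(f(f(1)))               # estimate of e^^(2/3): ~1.94
print(f(f(f(1))), math.e)    # f(f(f(1))) should be close to e^1
```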
7:01 I feel like this should also be provable from how x^0 is 1, by dividing x^1 by x^1 to get x^0, but it doesn't hold up.
Fascinating!
Too bad I don't fully understand English, but mathematics is a universal language and I was able to follow the reasoning... though it's a video I'll need to rewatch and listen to again to understand it fully.
THANK YOU for these demonstrations.
3:24 It's neat how the little harmless-looking number ³4 is greater than the number of subatomic particles in the entire Universe.
Much much much greater
@14:46 I don't see how you can get 5a(x) and not 6a(x).
If 3a(x) = a(a(a(x))), then 2a(3a(x)) = 2a( a(a(a(x))) ) = a(a(a( a(a(a(x))) ))) = 6a(x)
i mean he’s basing it off a different rule he found earlier in the video, that being ma(na(x))=(m+n)a(x)
therefore making 2a(3a(x))=5a(x)
Is there any kind of justification that cutting of the Taylor series at some point and then nesting it converges as you cut off later and later?
I don't have one, but I did notice that for increasing the degree of the approximation, the coefficients of the Taylor approximations of f(x) tended to converge, if that counts for anything!
I'm enjoying this, but it grates that the video makes the 'fence post error' right out of the gate and then keeps making it.
'2 x 3' does not mean 'add 2 to itself 3 times', it means 'add 2 to itself twice'.
It's the number that shows up n times, whereas the operation shows up (n-1) times.
Yes, and this is all due to a lack of rigor and a poor understanding of the language. These small details are ultimately important, and they tell me a lot about how much one should trust a mathematics communicator on UA-cam.
In fairness to the creator of this video, I think that other than that aspect, it's very good. Also, SoME (which is the context for this video) is a superb initiative that's supposed to be about new maths popularisers trying new things, and that will inevitably involve some stumbles.
So I salute the creator, and I hope he reads this and is not discouraged. Furthermore, I am NOT particularly trained or knowledgeble about any of this stuff, I am very much a stumbling amateur myself. So what follows may be wrong.
But having reflected on it some more after watching this video, I think that instead of '2 x 3 means add 2 to itself twice', the best way of expressing it MIGHT be to say '2 x 3 means add 2 to THE IDENTITY UNDER ADDITION three times'.
In other words, '2 x 3 means take ZERO and add 2 to that, doing this 3 times in succession'. Then '2 ^ 3 means take ONE and multiply that by 2, doing this 3 times in succession'.
If you try to extend this to tetration, you will hit difficulties, because there isn't (straightforwardly) an identity under exponentiation. But then if you explore that fact, I think it takes you into similar territory to the ideas explored in the rest of this video.
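That reformulation is easy to play with in code. A small sketch (my own illustration; for the tetration row I seed with 1, which, as noted above, is not a true identity under exponentiation, just the conventional seed):

```python
from operator import add, mul

def fold(op, seed, a, n):
    # start from `seed` and apply "a op (...)" exactly n times
    acc = seed
    for _ in range(n):
        acc = op(a, acc)
    return acc

print(fold(add, 0, 2, 3))   # 2*3 = 6: add 2 to ZERO three times
print(fold(mul, 1, 2, 3))   # 2^3 = 8: multiply ONE by 2 three times
print(fold(pow, 1, 2, 3))   # 2^^3 = 16: right-associated tower seeded with 1
```

Note the operation genuinely occurs n times here, which also sidesteps the fence-post complaint raised earlier in the thread.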
Wow, I thought you had 700k subs; you are VERY underrated... You have gained my sub, nice work!
Because we want to evaluate at x=1, shouldn't we take the Taylor series at x=1 instead of x=0? I mean, the error of the Taylor series against exp(1) is 0.21 out of 2.71... The algebra would then be the same, I guess, using x+1 instead of x everywhere? I haven't done it, so I don't know if it would end up giving the same answer TBH, but it sounds more correct to me to do it at x=1...
Last night, I stumbled upon the concept of commutative and fractional hyperoperations, and of course I get recommended this today! Great explanation of tetration! Are you at all familiar with commutative exponentiation and the like? It looks like f(a,b) = e^(ln(a)*ln(b))
In exponentiation, a positive power means multiplication and a negative power means division.
The opposite of positive is negative, just as the opposite of multiplication is division, so
in tetration, a positive power means exponentiation and a negative power means a logarithm (log).
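For integer heights of e, that rule is a two-line sketch (my own illustration; note that height -2 already fails, since it would require ln(0)):

```python
import math

def tet_e(n):
    # e^^n for integer n >= -1: climb with exp, descend with log
    x = 1.0                  # e^^0 = 1
    for _ in range(n):
        x = math.exp(x)      # positive height: exponentiate
    for _ in range(-n):
        x = math.log(x)      # negative height: take a logarithm
    return x

print(tet_e(2))    # e^e
print(tet_e(-1))   # ln(1) = 0; tet_e(-2) would raise a math domain error
```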
Can you do pentation pls?
Idk anything about pentation
That was super interesting; the idea of going back to nested functions' properties to grasp T(e,1/2) is great!
I was stuck thinking that f(x) = T(x, 1/2) was such that T(f(x),2) = x (which makes f(x) = ln(x)/W(ln(x)) ) but your way seems more convincing 😂
the only thing left is… to find an expression of T(e,x), or really T(x,y) for x,y > 0…
I have an idea of something silly involving partial derivatives where the function (T(e,y))^x would arise, tell me if you're interested to see it x')
show us!
well @@lunaticluna9071, have you heard about fractional derivatives (if not, just check Wikipedia or YouTube)? My idea is simple: instead of taking the α-th derivative for a given α, it could be interesting to take the partial derivative of (f^(α))(x) with respect to α, making it a sort of "2nd-order derivative" of f. But then we could differentiate this with respect to α a second time, or any integer number of times, or even any real number of times using fractional derivatives again. Then, if we differentiated β times with respect to α, we can now differentiate with respect to β to get the 3rd-order derivative, and so on and so forth…
And here we face something reminiscent of hyperoperations but applied to functions (let's say analytic functions at first, easier to deal with). The funny thing is that for any n, the function whose n-th-order derivative is itself (and such that f(0)=1) seems to be (T(e,n))^x (x being a simple exponent)… But now, say we managed to find an easy way to deal with n-th-order derivatives for any n; there might be a way to get the α-th-order derivative (again, α a positive real number). I know I'm going kinda fast here, but logically we should find that the function whose α-th-order derivative is itself would be (T(e,α))^x
but yeah hard to dive into the details in a youtube comment x')
Why do you set the coefficients of the different powers of x to 1, 1 and 1/2 at 20:33?
chemists been doing acid base tetration for years, its about time math caught up
Where do you get the "accepted value"?
This was a neat and concise video. Thanks for posting.
I kept getting confused about how 2 tetrated to 3 = 16 and how 2 tetrated to 4 = 65536, but after watching this I gained the proper knowledge of how to perform the operation: simply go down the power tower. Say 2^2 (the very top of the tower) gives 4; then move down to the next 2, making it 2^4 = 16; then, since no part of the tower remains, it's just the original 2^16 = 65536. God, I love when I finally understand math :D
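That top-down reading is exactly the usual recursive definition; as a sketch:

```python
def tet(a, n):
    # a^^n = a^(a^^(n-1)), with the convention a^^0 = 1
    return 1 if n == 0 else a ** tet(a, n - 1)

print(tet(2, 3))   # 2^(2^2) = 16
print(tet(2, 4))   # 2^(2^(2^2)) = 2^16 = 65536
```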
Can you do a tetration tower, like ³(³3) (tetrations stacked on each other)? Or pentation?
What happens if you use Padé approximants instead of Taylor series? Your error should decrease considerably.
Repeated counting -----> Addition
(The zeroth repeating operation)
5:22 I haven't watched the video yet, but I already know a short way. I start with the example of x tetrated by 3; this is the same as x to the x to the x. In general, x tetrated by n is x to the x, n/2 times. Now I'll take the natural log of x tetrated to 3, which is the same as the natural log of x to the x to the x. We multiply the powers to get the natural log of x to the (x times x), which is the same as the natural log of x to the x squared. Now, a property of logarithms is that the log (including the natural log) of x to some power is the exponent times the log of x. So we rewrite our equation as x squared times the natural log of x. We can now say x tetrated to n is the same as x to the (n-1) times the natural log of x. And it's reasonable (and true) to assume this works for all numbers. Therefore we just take e to the power of this formula and we have an equation for tetration.
I've heard of repeated exponentiation! I didn't have a name for it, but my first foray into that realm was learning about Graham's number.
This video was amazing! I've looked into this same topic a while back and I wish I could have found a method as creative as yours. Although I have one question, where did you find the value for e^^1/2 that you showed at 21:46 ? I remember scouring through the internet for stuff about tetration and I've seen no mention of that.
en.citizendium.org/wiki/tetration#Taylor_expansion_at_zero lists the coefficients of a Taylor series centered at 0 for the xth tetration of e. I used the Taylor series to get a value for the 1/2 tetration of e up to enough precision where it disagreed with my approximation. I've listed the link in the comments now!
Thanks for the video, it was very enjoyable to watch! I’m a bit confused on the explanation at 21:03. The 3 equations are the same as what I got when expanding, however using wolfram I got answers (a≈0.480784, b≈1.48769, c≈-0.848109) or (a≈0.496487, b≈0.845326, c≈0.340039).
But my main point was that none of these 3 solutions look the same if you plot f(f(x)) against the original polynomial, which I doubt is a problem that arises from a lack of arithmetic precision. Could it be due to the fact that you assume f(x) is a 2nd-order polynomial instead of, say, a polynomial with fractional powers?
Additionally, how come you (seemingly) brush off the higher-power coefficients? To me, it seems like they'd serve as useful additional restrictions if they're set equal to 0. Admittedly, I don't know much about how functional square roots are calculated, so is this a standard technique when solving for them?
Of course, I plugged the system of equations with more restrictions into Wolfram as well, which claimed there were no solutions.
Thanks to your comment, I just noticed there is a typo in that slide! You'll see that the line at the top has b^2*c, while the line underneath (which I'm assuming you used) has b*c^2. The top line is correct, and you should get the solutions given in the video.
However, the systems of equations do (mostly) have multiple solutions. For these, I picked the solutions that had coefficients closest to 0 and the ones which had coefficients similar to those of previous degrees (yes, I know it's extremely informal!) I'm not certain what the other solutions could represent or if they give the same value for the 1/2 tetration of e - another fascinating extension of this problem I didn't have time to include!
The main reason I discarded the higher-power coefficients was because, in a sense, not enough information about those degrees is known. I tested out these systems of equations as well, and part of the reason could be that introducing these restrictions gives 5 equations for 3 unknowns, essentially overdetermining the system. The fact of the matter is that the coefficients of the Taylor series for e^x are NOT equal to 0, but they're not specified in the approximation, so I see these higher coefficients as similar to how information is thrown away when dealing with calculations with significant digits.
Thanks for the great comment!
@@Tetrolith-ko5yu Thanks for your reply! I just checked the calculations with the correct equations, and indeed got the same answer as you did. And thank you for clarifying your process for eliminating the higher-order coefficients!
However, I'm still a little confused about how "bad" an approximation f(f(x)) is for p(x) = 1 + x + 1/2x^2. I know you talked about how it compares to e^x, but intuitively it feels like f(f(x)) should at least be able to approximate p(x) quite well, whereas it only does that near 0. Perhaps there could be a non-polynomial solution that works as a better approximation? Maybe I'm missing something obvious.
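For anyone wanting to reproduce the degree-2 system being discussed (the constant, x, and x² coefficients of f(f(x)) for f(x) = a + bx + cx², matched against 1 + x + x²/2, with the b²c term in the x² line), here is a Newton-iteration sketch. It assumes NumPy is available; the starting guess and iteration count are my own choices, not from the video.

```python
import numpy as np

def F(v):
    # residuals of the coefficient-matching equations
    a, b, c = v
    return np.array([
        a + a * b + a * a * c - 1.0,              # constant term
        b * b + 2 * a * b * c - 1.0,              # x term
        b * c + b * b * c + 2 * a * c * c - 0.5,  # x^2 term
    ])

def J(v):
    # Jacobian of F, worked out by hand
    a, b, c = v
    return np.array([
        [1 + b + 2 * a * c, a,                 a * a],
        [2 * b * c,         2 * b + 2 * a * c, 2 * a * b],
        [2 * c * c,         c + 2 * b * c,     b + b * b + 4 * a * c],
    ])

v = np.array([0.5, 0.9, 0.25])    # start near the expected solution
for _ in range(50):
    v = v - np.linalg.solve(J(v), F(v))

a, b, c = v
f = lambda x: a + b * x + c * x * x
print(a)       # should land near the video's a ≈ 0.4979
print(f(1.0))  # the resulting estimate of e^^(1/2)
```

As noted in the replies, the system has multiple solutions; a different starting guess can land on a different root.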
Can you do the same video for pentation? Like using real numbers in pentation.
Great Video. Hope you post more, but I do wonder how one would generalize to the irrational numbers.
Nice video. I was hoping to find a generalised function that could give us the irrational tetration of any number at all, say x tetrated 1/2 times. Care to share links where I could find such an identity?
Damn, really interesting. Quite elegant way of generalizing.
Well, what about the binary operation which repeated gives you addition?
That's incrementation, which is to say, "counting up". 2 + 3 means, from 2, count up 3 times: 2... 3... 4... 5.
@@RollcageTV is it a binary operation?
@@ILSCDF It's a unary operator, usually called the successor S(x) (see Peano axioms). But in the hyperoperations family, for consistency, it is usually defined as H_0(a, b) = b + 1, where the first operand "a" is ignored.
@@kikones34 That does feel forced. Is it really a binary operation if the first operand doesn't influence the operation?
@@ILSCDF I'd consider it a degenerate case of a binary function at most. The Wikipedia article on hyperoperations seems to agree: "Note that for n = 0, the binary operation essentially reduces to a unary operation (successor function) by ignoring the first argument."
could you do ⁻²x with complex numbers?
x^^(-2) would be -∞, because ln(0) = -∞ (i.e., it is undefined)
What if we can do (complex number)^^(complex number)???
i^i≈0.2078
@@stickman_lore_official6928 I meant Tetration with complex numbers like i^^i.
@@abdul-muqeet i^^i=0.6552811790031976-0.99917222588218111956172i
I wonder what this answer is algebraically, and whether it's the tetration equivalent of the square root or not... how does it relate to the other operations...
I looked at the sub counter, and it was so obvious to me that this channel had to be well-known given the quality that my brain added a "k" after the 343.
Very impressive! One might say, stupendous!
It would be interesting to next analyze its derivative and integral.
What would fractional and complex tetration look like?
Why do you say that 2 x 2 x 2 involves three operations instead of just two? Clearly the OPERATION involved (i.e., multiplication) has only been applied twice. And the notation 2^2 = 4 says: take 2 times itself TWO, not three, times.
Tetration is not repeated exponentiation as it is usually expressed, at least not in the classical sense. See, repeated exponentiation can be viewed in 2 ways. One of them is repeated exponentiation onto the base, which would look something like this:
(((2^2)^2)^2)... N times = 2^(2^N)
and the other is repeated exponentiation into the exponent, which looks like standard tetration.
Because for the operations of lower order (multiplication, addition) the order of the terms doesn't affect the result, the distinction between repeating an operation onto the "base" or onto the other terms gives you the same result, meaning their iterations (the operation of higher order) can be expressed as a single operation. This breaks down for operations above exponentiation: you cannot define the iteration of exponentiation (tetration) as a single operation, for this reason.
Wow! This video was very insightful. I went in looking to further my knowledge on tetration, and I at least feel like I know more about it lol.
Trips me up a bit that iterated composition (composition as in (f;g)(x) = g(f(x)) ) of a function with itself is shown here with a left subscript when I'm used to seeing it written for example as (f;f;...;f)(x) = f(f(...f(x))) = f^n(x). Otherwise this is a fantastic bit on the subject of power towers!
I can't even begin to wonder what a complex tetration would look like
amazing video dude, keep up the good work!
2x3 is two times, not three. You add 2 to itself once to get 4, then twice to get 6.
It's weird.
Think of it like, M(2, 3) = ₃A(0, 2)
(Where M(x, y) = xy and A(x, y) = x + y)
Why is tetration 2^(2^2) and not (2^2)^2 ? The latter would make more sense to me in the sense of hyperoperations, any insights would be helpful. Thanks!
Exponents are always done first due to the order of operations, so you have to start at the top.
They are two different operations. 2^(2^2) is right tetration, while (2^2)^2 is left tetration. Standard tetration is defined on the right because of exponentiation notation: 2^2^2 = 2^(2^2).
Left tetration is "boring" because (a^b)^c is the same as a^(bc), so right tetration was chosen.
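The two conventions side by side, as a sketch:

```python
from functools import reduce

def tet_right(a, n):
    # standard (right) tetration: a^(a^(...^a)), seeded with a^^0 = 1
    return reduce(lambda acc, _: a ** acc, range(n), 1)

def tet_left(a, n):
    # "boring" left tetration: ((a^a)^a)... = a^(a^(n-1))
    return reduce(lambda acc, _: acc ** a, range(n - 1), a)

print(tet_right(3, 3))   # 3^(3^3) = 3^27 = 7625597484987
print(tet_left(3, 3))    # (3^3)^3 = 3^9  = 19683
```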
Others have suggested chasing down *irrational* tetrations ...
Or checking out *repeated* tetrations ...
but how about trying to establish if *complex* tetrations (involving roots of negative numbers) are possible ...
I don't mean actually doing them:
just see if:
they can be proven to be possible or:
they can be proven to be impossible or:
do both of these proofs occur simultaneously or:
are they impossible to prove with our current understanding of Maths.
If you accept that a Taylor series that produces arbitrarily accurate values is a good enough solution, then you can plug in complex numbers, square matrices, and so on into the Taylor series.
We could generalise tetration to rational numbers.
But how about generalisation of errr... Well, if f(a,b,1)=a+b, f(a,b,2)= a*b, f(a,b,3)=a^b, then f(2,3,3.5)=?
2^3=8
2↑↑3=16
f(2,3,3.5)=10.67754014777842554227 (20 decimal places)
@@stickman_lore_official6928 How?
@@stickman_lore_official6928 And what would f(2,3,3.01) or f(5,3,2.6) be?
You probably know this, but x↑↑(1/a) is the inverse function of x↑↑a; for example, x↑↑(1/2) is the inverse of x↑↑2, which is x^x, whose inverse we know: e^LambertW(ln(x))
Computing e^W(ln(e)) gives 1.76322283435189671022... and that doesn't seem to match the value shown in the video so I don't think that's how it works
no this doesn't make sense, cos its based on (a^b)^c = a^(bc) which isn't true for tetration.
@@Gears_AndGrinds x^x = y ⇒ x·ln(x) = ln(y); since x·ln(x) = ln(x)·e^(ln(x)), we get ln(x)·e^(ln(x)) = ln(y), so ln(x) = W(ln(y)) and x = e^(W(ln(y))), so he is right
@@amr0733 You are correct about it being the inverse function of x^x however that isn't the same thing as x^^1/2. Read the reply above
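For reference, the quantity being discussed, e^(W(ln y)) as the inverse of s(x) = x^x, can be checked with a small Newton iteration for the principal branch of Lambert W (my own sketch; and, as the replies note, inverting x^^2 is not the same thing as evaluating x^^(1/2)):

```python
import math

def lambert_w(z, tol=1e-14):
    # principal branch of W (solves w * e^w = z) via Newton's method; z > 0 here
    w = math.log1p(z)              # rough starting guess
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - z) / (ew * (w + 1))
        w -= step
        if abs(step) < tol:
            break
    return w

def inv_xx(y):
    # solves x^x = y for x, valid for y > 1
    return math.exp(lambert_w(math.log(y)))

print(inv_xx(4.0))        # 2, since 2^2 = 4
print(inv_xx(math.e))     # ~1.7632, the value quoted above
```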
22:49 you say 1, do you not mean 0?
Thank you 🙏
This can easily solve tetra roots and tetra logs!?! 👏👏👏👍👍👍👌👌👌
But did you know there also exists quadration?
I have a hard time figuring out what quintation would look like, though 😮
@@adumont (ꏿ﹏ꏿ;)
@@adumont Pentation, or repeated tetration
3 pentated to 2 = ³3 = 7625597484987
3 pentated to 3 = 3 tetrated 7625597484987 times, i.e. a power tower of 3s ("3 exponentiated to itself") of height 7625597484987
This is all fine and dandy, but what do you call repeated tetration?
Pentation, Hexation, Heptation, Octation, Nonation...
You can see the pattern.
Pentation
Did you come up with the nested function notation? Or was it already developed?
I haven’t seen this particular style of notation using a leading subscript, but there are a few other ways I have seen, such as something like f²(x) = f(f(x)), which matches the inverse function notation f^(-1)(f(x)) = x. Sometimes the superscript is preceded by the ring operator used for function composition (f∘g).
I just remembered, I’ve also seen a version of iterated composition that was like the sum Σ or product Π operators. It used the Cyrillic letter И which I think stood for the Russian word for “iteration”. It looked like И_{x→5}^{3}(2x+1) = 2(2(2(5)+1)+1)+1 = 47. Try pasting that into a LaTeX viewer if you want the full experience.
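That И-style operator is straightforward to write as a higher-order function (a sketch):

```python
def iterate(f, n):
    # returns f composed with itself n times; n = 0 gives the identity
    def g(x):
        for _ in range(n):
            x = f(x)
        return x
    return g

# the example from the comment: pass x = 5 through 2x + 1 three times
print(iterate(lambda x: 2 * x + 1, 3)(5))   # 2(2(2*5+1)+1)+1 = 47
```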