"If you only have 1 over cosh apples, you only have 1 over cosh apples." ---- Papa Flammy 2020
Does that make sense
>nah
This statement is somehow both trivially true and also makes no sense
But . . . but . . . what about Mackintosh apples?
Why does he like apples so much ?
Sasta Darkseid Because an apple a day keeps the doctor away.
The real reason why it looks like just an inverse tangent curve is because sinh(x) ≈ x for small x.
Yes.
I still find the complex definitions a little more beautiful:
cos(ix) = cosh(x), and sin(ix) = i sinh(x).
Because we know cos^2(x) + sin^2(x)=1, we can conclude cosh^2(x) - sinh^2(x) = 1
So if we introduce x=cosh(t) and y=sinh(t), then plot these points in the plane, we can see why they are called hyperbolic - because they satisfy x^2-y^2=1, which is quite nice.
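A quick numeric sanity check of these identities (a minimal sketch in Python, assuming only the standard library; the test points are arbitrary):

```python
import cmath, math

x = 0.7  # arbitrary test point

# cos(ix) = cosh(x) and sin(ix) = i*sinh(x)
print(cmath.cos(1j * x), math.cosh(x))        # both ~1.2552
print(cmath.sin(1j * x), 1j * math.sinh(x))   # both ~0.7586j

# cosh(t)^2 - sinh(t)^2 = 1, so (cosh t, sinh t) lies on x^2 - y^2 = 1
t = 1.3
print(math.cosh(t) ** 2 - math.sinh(t) ** 2)  # ~1.0
```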
I prefer the complex definition too. Maybe I just don't understand gd(x), but it seems like he is simply defining sinh(x) as tan(tan^-1(sinh(x))), which doesn't really tell you anything at all
Jacob Calderbank The Gudermannian function has plenty of important applications and predates our modern understanding of the hyperbolic functions.
He already talked about the fact that cosh(t)^2 - sinh(t)^2 = 1 in the video. This fact implies the connection to the complex numbers above.
Jacob Calderbank If it really bothers you, then you can always find an alternative expression for gd(x) by employing a different method of integration for sech(x). Namely, gd(x) = 2·arctan(e^x) - π/2, so now, you are defining sinh(x) = tan[2·arctan(e^x) - π/2]
@@angelmendez-rivera351 why -pi/2 as a constant?
Man, when I was in Calc II I tried bringing this up with my TA. I was really bad at formulating my thoughts, still a pretty crap calculus student at the time.
To this day I'm still upset how much she blew me off when I tried to bring it up, basically reinforcing how bad at math I was.
Been waiting for this day for a long time, thank you Papa Flammy!
Jens Fehlau be like:
"If I have 0 apples, then I have 0 apples."
I hadn’t learned the hyperbolic trig functions in my calc class but I’m studying them myself this summer, great timing for this video :) I think their connection to the complex definitions of the trig functions is super interesting!
:)
I love these kinds my man. Always learn something interesting I'd say!
There is so much to be explored from this. Let me address some complaints and confusion in the comments.
1. Some viewers are arguing that defining sinh(x) := tan[gd(x)] is redundant and useless because gd(x) = arctan[sinh(x)], hence this amounts to tan(arctan[sinh(x)]) = sinh(x), which is a vacuous definition of sinh(x). However, what people here are forgetting is that arctan[sinh(x)] is only one of multiple expressions for gd(x). In the prerequisite video in which Papa Flammy found the definite integral of sech(t) on (0, x), it was discussed in the comments that there are multiple ways of evaluating the integral. One of the ways involves writing sech(t) = 2/[exp(t) + exp(-t)] = 2·exp(t)/[exp(t)^2 + 1], substituting u = exp(t), and simplifying. This results in the expression 2·arctan[exp(x)] - π/2, which makes no direct reference to the hyperbolic functions. Therefore, tan(2·arctan[exp(x)] - π/2) = sinh(x) is a suitable definition explicitly in terms of only exponential and trigonometric functions, and using only real numbers. (A small numeric check of this is sketched after this comment.) There are multiple other expressions for the function gd(x) as well that do not involve hyperbolic functions directly.
2. If the Gudermannian function is an elementary function, then why have special notation for it? For the same reason we have special notation for sinh(x) and cosh(x). It recurs in important applications, and it is extremely useful to have concise notation for it, especially because otherwise, working with it would result in very cramped notation with unpleasant combinations of exponential and trigonometric functions that are not sufficiently illuminating in helping understand the properties of the function. The inverse of the Gudermannian function is known as Mercator's function and it also plays an important role in navigation.
3. The Gudermannian function and its inverse satisfy a very interesting, very useful, and rare pairwise property. Notice that sech(x) has a Maclaurin series with summands E(2n)·x^(2n)/(2n)!, and sec(x) has a similar Maclaurin series with summands (-1)^n·E(2n)·x^(2n)/(2n)!, which is no surprise, given that sec(ix) = sech(x). By the way, E(n) is the nth Euler number. Anyhow, the idea is that by antidifferentiating both functions, this results in the Maclaurin series for gd(x) having summands E(2n)·x^(2n + 1)/(2n + 1)!, while its inverse has the Maclaurin series with summands (-1)^n·E(2n)·x^(2n + 1)/(2n + 1)!. The only difference in the Maclaurin series lies with the factor of (-1)^n, which makes the inversion of the Maclaurin series of each function into the other as simple as it can be without being trivial. It is rare for the Maclaurin series of the inverse to be so simply similar to that of the original function, and this happens to be very useful in calculation. Also, just in case this was not sufficiently evident from what I just stated, what this means is that i·[gd^(-1)](x) = gd(ix), and [gd^(-1)](ix) = i·gd(x).
4. Some comments have brought to attention the fact that the graph of y = gd(x) is very similar to the graphs of y = arctan(x), y = tanh(x), and y = sqrt(π)/2·erf(x). Why is this the case? Why is there so much overlap? The answer is simple for the first two. gd(x) = arctan[sinh(x)], and x approximates sinh(x) to first-order, so arctan(x) approximates gd(x) as well. Also, it can be proven via some hyperbolic and trigonometric identities that arctan[sinh(x)] = arcsin[tanh(x)]. Therefore, gd(x) = arcsin[tanh(x)]. x approximates arcsin(x) to first-order once again, so tanh(x) approximates gd(x). The final similarity is less obvious. gd(x) is the antiderivative of sech(x), while sqrt(π)/2·erf(x) is the antiderivative of exp(-x^2). If you graph y = sech(x) = 2·exp(x)/[exp(x)^2 + 1] and y = exp(-x^2) on Desmos, you will find that they are very similar: this is to say, these functions approximate one another. Why is this the case? This seems very non-trivial and it does not seem like there is a reason for it to be the case. Take logarithms, giving the graphs for y = -x^2 and y = ln(2) + x - ln[1 + exp(x)^2], or alternatively, y = x^2 and y = -ln(2) - x + ln[1 + exp(x)^2]. With this, it becomes more obvious why the similarities are there, since -ln(2) - x + ln[1 + exp(x)^2] can be approximated with x^2 as long as some correction is applied to the former via series truncation.
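Here is a minimal numeric sketch (Python, standard library only; the helper names gd and gd_alt and the test points are my own) checking point 1's exponential expression for gd and point 4's approximations:

```python
import math

def gd(x):
    """Gudermannian via the arctan(sinh) form."""
    return math.atan(math.sinh(x))

def gd_alt(x):
    """Alternative form obtained from integrating sech(t) = 2e^t/(e^(2t) + 1)."""
    return 2.0 * math.atan(math.exp(x)) - math.pi / 2.0

x = 0.9
# Point 1: tan(gd(x)) = sinh(x) also works with the purely
# exponential/trigonometric expression for gd, no hyperbolics needed.
print(math.tan(gd_alt(x)), math.sinh(x))        # both ~1.0265

# Point 4: gd(x) = arctan(sinh x) = arcsin(tanh x), and for small x it is
# close to arctan(x) and tanh(x).
print(gd(x), math.asin(math.tanh(x)))           # identical, ~0.7985
print(gd(0.2), math.atan(0.2), math.tanh(0.2))  # all roughly 0.20
```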
A very good comment. As to part 3, another way to see that gd(x) and -i gd(ix) are inverses is using sech x = sec ix and the integral definitions of the Gudermannian and its inverse. Of course, this requires knowing that "the" 😁 antiderivatives of sec x and sech x are inverses in the first place.
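A small complex-arithmetic check of that inverse relation (a sketch assuming only cmath and math from the standard library; the helper names are made up for illustration):

```python
import cmath, math

def gd(z):
    """Gudermannian, valid for real or complex arguments."""
    return cmath.atan(cmath.sinh(z))

def gd_inv_candidate(x):
    """-i * gd(i x), which should equal the inverse Gudermannian for real x."""
    return (-1j * gd(1j * x)).real

x = 0.8
print(gd_inv_candidate(gd(x).real))                   # ~0.8, so it undoes gd
print(gd_inv_candidate(x), math.atanh(math.sin(x)))   # both ~0.9022, i.e. gd^-1(x)
```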
This is quite beautiful!
I should have expected something like that to pop out of the suspiciously similar relationships:
sec²(x)=1+tan²(x)
cosh²(x)=1+sinh²(x)
Sinh(x)/h=sin(x)
sin(x)/n=six=6
Hi, the function gd, expressed as the inverse tangent composed with sinh, takes values in (-π/2, π/2), so the inverse gd is defined on (-π/2, π/2). Thus at 10:37 you don't need the absolute value, because secant is positive on (-π/2, π/2).
I’ve really been enjoying these videos on hyperbolics, they’re cool af
6:56
Who else thought of the error function?
Every time I see the hyperbolic tangent
It is no coincidence, since the graph of the Gaussian function, y = e^(-x^2), is similar to the graph of y = 2e^x/[e^(2x) + 1] = sech(x).
Why is y = exp(-x^2) similar to y = 2·exp(x)/[exp(x)^2 + 1]? Consider f(x) = exp(x), g(x) = -x^2, and h(x) = 2x/(x^2 + 1). There exists some function F(x) such that F[f(x)] = f[g(x)]. The claim is that h(x) approximates F(x) to at least first-order and at most fourth-order. This approximability is the reason why the curves are similar.
Meanwhile, y = gd(x) and y = tanh(x) are similar because gd(x) = arcsin[tanh(x)], and x approximates arcsin(x) to first-order, hence tanh(x) approximates arcsin[tanh(x)] to first-order.
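For a rough sense of how closely the two bell curves track each other, here is a small Python table (sample points chosen arbitrarily):

```python
import math

# Compare the Gaussian exp(-x^2) with sech(x) = 2 e^x / (e^(2x) + 1).
# Both are even bell curves with maximum 1 at the origin; sech decays
# noticeably more slowly in the tails.
for x in [0.0, 0.5, 1.0, 1.5, 2.0]:
    gauss = math.exp(-x * x)
    sech = 2.0 * math.exp(x) / (math.exp(2 * x) + 1.0)
    print(f"x={x:.1f}  exp(-x^2)={gauss:.4f}  sech(x)={sech:.4f}")
```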
That clock triggers every fiber of my being... 😠
Joking, love ya flam
😁
Great explanation thanks!
I'm amazed how the quality on these videos has risen, well done Papa.
you're the only maths channel I can watch after a Calculus 2 exam without feeling bad about myself ^^
"If you have 1/cosh apples you only have 1/cosh apples " 3:09 to 3:13
Hey Flammy, can you please do some topology? Btw, whatever you are bringing to this channel is just amazing... such nice pedagogical techniques you have... hope you keep on teaching forever!
Wow... super interesting... I really like that you have been incorporating Desmos also
:) Glad it's to your liking!
Just beautiful ❤️
Therefore I want to watch it again.
Thank you dear *Flammable Maths*
I eat 1/cosh apples and a piece of integral bread from 0 to x of sech(t) dt every day for breakfast and I get nourishment from my trigonometrical wonderland.
I was like 'isn't it just a complex argument?' Papa blowing minds over here.
A very beautiful demonstration!
Thank you🔝🔝🔝
In France they introduce the hyperbolic functions just by giving their definition: sinh(x) = (exp(x) - exp(-x))/2
Thank you!
I'm triggered by the red slash over the set of the complex numbers in the thumbnail.
xD
Conic sections and conformal transforms are an interesting thing here
What is a little a with arrow on top, help, thanks.
Nice video. I'd been wondering if you'd be covering this, given your recent videos on integrating sec x and sech x. Speaking of which, they're a dandy way to see that the inverse of gd(x) is -i gd(ix).
But what's the sum of the first n natural numbers, using a formula, and an easy derivation of the formula?
I mean the squares of the first n natural numbers. Please make a video, papa
Carlos de Cabo de la Vega Consider the function f(n, m) = n!/(n - m)!, with n = m or n > m, and f(n, m) = 0, with n < m, where n and m are positive whole numbers. Hence f(n + 1, m) - f(n, m) = (n + 1)!/(n + 1 - m)! - n!/(n - m)! = [n!/(n - m)!][(n + 1)/(n + 1 - m) - 1] = f(n, m)[(n + 1)/(n + 1 - m) + (m - n - 1)/(n + 1 - m)] = m/(n + 1 - m)·n!/(n - m)! = m·n!/[n - (m - 1)]! = m·f(n, m - 1). Therefore, f(n + 1, m) - f(n, m) = m·f(n, m - 1). Summing this from n = 1 to n = μ, this implies f(μ + 1, m) - f(1, m) = m·Sum{n = 1, n = μ, f(n, m - 1)}, or Sum{n = 1, n = μ, f(n, m - 1)} = f(μ + 1, m)/m - f(1, m)/m. Substitute m for m + 1, hence f(μ + 1, m + 1)/(m + 1) - f(1, m + 1)/(m + 1) = Sum{n = 1, n = μ, f(n, m)}. Since m is positive, the minimum value of m is 1, so the minimum value of m + 1 is 2. 1 < 2, so f(1, m + 1)/(m + 1) = 0. Therefore, Sum{n = 1, n = μ, f(n, m)} = f(μ + 1, m + 1)/(m + 1). Rewriting everything with factorials, and changing the summation index from n to λ, this implies Sum{λ = 1, λ = μ, λ!/(λ - m)!} = (μ + 1)!/[(m + 1)(μ - m)!].
Now that I have proven the above formula, the next step is to find a way to write λ^2 as a linear combination of elements of the form λ!/(λ - m)! for various m. This is possible and rather simple. Notice that λ^2 = (λ^2 - λ) + λ = λ(λ - 1) + λ = λ!/(λ - 2)! + λ!/(λ - 1)!. Therefore, Sum{λ = 1, λ = n, λ^2} = Sum{λ = 1, λ = n, λ!/(λ - 1)!} + Sum{λ = 1, λ = n, λ!/(λ - 2)!} = (n + 1)!/[(2 + 1)(n - 2)!] + (n + 1)!/[(1 + 1)(n - 1)!] = (n + 1)n(n - 1)/3 + (n + 1)n/2 = [(n + 1)n][(n - 1)/3 + 1/2] = [(n + 1)n][n/3 + 1/6] = n(n + 1)(2n/6 + 1/6) = n(n + 1)(2n + 1)/6, which is the formula that one will normally find everywhere. Therefore, the sum of the first n perfect square numbers is equal to n(n + 1)(2n + 1)/6.
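A brute-force check of both the falling-factorial identity and the final closed form (a minimal Python sketch; the function name f mirrors the notation above, and the test values μ = 10, m = 3, n = 10 are arbitrary):

```python
from math import factorial

def f(n, m):
    """f(n, m) = n!/(n - m)! for n >= m, else 0 (falling factorial)."""
    return factorial(n) // factorial(n - m) if n >= m else 0

mu, m = 10, 3
lhs = sum(f(lam, m) for lam in range(1, mu + 1))
rhs = factorial(mu + 1) // ((m + 1) * factorial(mu - m))
print(lhs, rhs)  # both 1980

n = 10
print(sum(lam ** 2 for lam in range(1, n + 1)),
      n * (n + 1) * (2 * n + 1) // 6)  # both 385
```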
How do you make Desmos dark mode?
I don't know how he does it but I use an extension called Dark Reader to make all sites dark automatically
There is also Stylus, which is more versatile but you gotta set the style for every page manually yourself
wow another fantastischer apfel video jens, this brought a tear to my eye.
wow I was just looking into this for UTM coordinates
nice! :)
Be honest, you only inverted the colors cause it looked more like a 3B1B video :P
nah, my eyes are just bad af and I would like to relax them a bit while working at me compu-boi-chan
Here is a copy and paste from another comment I made somewhere else.
γ is the real part of a complex number. Using the Pythagorean theorem a^2 = b^2 + c^2: if we replace c with i*c, then a^2 = b^2 - c^2.
Let's go back to γ.
γ=1/sqrt(1-(v/c)^2)
cos(arctan(v/c))=1/sqrt(1+(v/c)^2)
Same thing with the Pythagorean Theorem bit, make v=i*v
γ=cos(arctan(iv/c))=1/sqrt(1-(v/c)^2)=c/sqrt(c^2-v^2)
What about sin? γ*v/c does actually show up here and there. But if you see that anywhere, and you square it, it's going to wreak havoc on everything. Why? Because there is an 'i' that pairs with v.
i*γ*v/c=sin(arctan(iv/c))=i*v/sqrt(c^2-v^2)
These two are like unit vectors. And because they are from Pythagorean Theorem the sum of their squares equals 1. Or rather, the difference (i^2).
(c/sqrt(c^2-v^2))^2+(i*v/sqrt(c^2-v^2))^2=1
What about that arctan?
arctan(iv/c)=i*v/c-(i*v/c)^3/3+(i*v/c)^5/5-(i*v/c)^7/7...
arctan(iv/c)=i*v/c+i*(v/c)^3/3+i*(v/c)^5/5+i*(v/c)^7/7...
We divide both sides by i and 1/i is -i.
-i*arctan(i*v/c)=v/c+(v/c)^3/3+(v/c)^5/5+(v/c)^7/7...
We can swap arctan for arctanh, since the right-hand side is exactly the Maclaurin series of arctanh.
arctanh(v/c)=v/c+(v/c)^3/3+(v/c)^5/5+(v/c)^7/7...
(e^arctanh(v/c)+e^(-arctanh(v/c)))/2=cosh(arctanh(v/c))
(e^arctanh(v/c)-e^(-arctanh(v/c)))/2=sinh(arctanh(v/c))
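A quick numeric check of where this lands, namely that γ = cosh(arctanh(v/c)) and γ·v/c = sinh(arctanh(v/c)) (a sketch in units where c = 1, with v = 0.6 chosen arbitrarily):

```python
import math

c = 1.0    # work in units where c = 1
v = 0.6    # arbitrary subluminal speed
beta = v / c

gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
rapidity = math.atanh(beta)               # arctanh(v/c)

print(gamma, math.cosh(rapidity))         # both 1.25
print(gamma * beta, math.sinh(rapidity))  # both 0.75
```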
Love the new format with desmos! Papa Flammy bLeSS
:)
Amazing video🔥🔥🔥
3:22 sneaky
Dear Flammable ! Please make a video on How the complex number related with circular function Property it means their contour form is a circle generally
Wouldn't the inverse be the reflection on the y=x bisector, rather than on the origin?
Yes. I think this is most likely what he meant. He probably mixed the idea of reflection with the idea of one-dimensional symmetry.
You can express gd^-1(x) = ln(abs(sec(x) + tan(x))), since sec x has an elementary antiderivative
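A small check that ln|sec(x) + tan(x)| really does invert gd (a sketch; gd is implemented here as arctan(sinh(x)) and the test point is arbitrary):

```python
import math

def gd(x):
    return math.atan(math.sinh(x))

def gd_inv(x):
    """The elementary antiderivative of sec: ln|sec x + tan x|."""
    return math.log(abs(1.0 / math.cos(x) + math.tan(x)))

x = 1.2
print(gd_inv(gd(x)))   # ~1.2, so ln|sec + tan| undoes gd
```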
Make a video also on the relationships between the trig and hyperbolic functions and the complex numbers
^^^!
Make a video about the Bernoulli numbers!!
And the Euler numbers.
And the Stirling numbers! (Of the first and second kind)
I'm a little confused - could we not produce a definitive solution to the gd (just by integrating sech x, say "blah") and instead say tan(blah) = sinh x, instead of tan(gd(x)) = sinh x?
We can. gd(x) is just shortcut notation and is simply better to work with. gd(x) = 2·arctan[exp(x)] - π/2.
Ah yes, *_APPLES_*
Wow! Really nice! You worked very well on this stuff, and I guess that maybe thinking about Taylor series could be very nice. I love your work and you have helped me very much since I started watching these videos. Have a nice day!!
The Maclaurin series of sech(x) is given by the sum of E(2n)·x^(2n)/(2n)! on the set of all the natural numbers indexed by n, with radius of convergence π/2 exclusive. In this case, E(n) represents the nth Euler number. The antiderivative, gd(x), has therefore the Maclaurin series given by the sum on the same set of E(2n)·x^(2n + 1)/(2n + 1)!, with the same region of convergence.
What is interesting is that the inverse function of gd(x) is the antiderivative of sec(x), which has almost the same Maclaurin series as sech(x), except with a factor of (-1)^n multiplying every term. Therefore, the Maclaurin series of [gd^(-1)](x) also has almost the same Maclaurin series as gd(x), except with (-1)^n multiplying every term.
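A short numeric sketch of this series relationship, with the first few even-indexed Euler numbers 1, -1, 5, -61, 1385 hard-coded (the helper names are made up; gd^-1 is compared against arctanh(sin x)):

```python
import math

# Even-indexed Euler numbers E_0, E_2, E_4, E_6, E_8
EULER = [1, -1, 5, -61, 1385]

def gd_series(x):
    """Truncated Maclaurin series: sum of E(2n) x^(2n+1)/(2n+1)!."""
    return sum(E * x ** (2 * n + 1) / math.factorial(2 * n + 1)
               for n, E in enumerate(EULER))

def gd_inv_series(x):
    """Same series with an extra (-1)^n on each term."""
    return sum((-1) ** n * E * x ** (2 * n + 1) / math.factorial(2 * n + 1)
               for n, E in enumerate(EULER))

x = 0.4  # well inside the radius of convergence pi/2
print(gd_series(x),     math.atan(math.sinh(x)))   # both ~0.38974
print(gd_inv_series(x), math.atanh(math.sin(x)))   # both ~0.4111
```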
@@angelmendez-rivera351 , u can try sec(ix) to get sech(x); And a huge thanks for recalling the Maclaurin series
I like the apple analogy.
Can you explain hyperbolic functions? BTW I understand trigonometry, but not hyperbolic functions.
The trigonometric functions parametrize the equations of ellipses. The hyperbolic functions parametrize the equations of hyperbolas. They are very much akin, with this being the only difference.
@@angelmendez-rivera351 thanks mate.
cool t-shirt though
Thx, it's always available at my shop! :3
7:38 looks like from one of Matt Parker's vids lol
Relevant link: en.wikipedia.org/wiki/Gudermannian_function Chalk it up also as yet another instance of naming a discovery in mathematics after someone other than the discoverer, Johann Lambert.
lambert already has his function. it’s a damn good one at that
13:50 looks more hyperbolic than logarithmic to me
It's both, because logarithmic curves are just hyperbolic curves post-integration.
In other words, the area enclosed by hyperbolic curves and the x and y axes is given by logarithmic functions.
Will somebody please demystify this?
In reference to the ellipse and hyperbola we come across these terms -- parametric forms -- a sin(theta), b cos(theta).
Can somebody show these on the diagram also?
To me it looks like the work of some absent-minded professor.
The equation of an ellipse with center (0, 0) is (x/a)^2 + (y/b)^2 = 1. Therefore, x = a·cos(t) and y = b·sin(t) parametrizes the ellipse, because then [a·cos(t)/a]^2 + [b·sin(t)/b]^2 = cos(t)^2 + sin(t)^2 = 1. Notice that the special case a = b simply gives a circle, and a = b = 1 gives the unit circle. The question is, why does this also work for parametrizing hyperbolas? Let t = iT. Hence cos(iT)^2 + sin(iT)^2 = 1. cos(iT) = cosh(T), and sin(iT) = i·sinh(T), hence cosh(T)^2 + i^2·sinh(T)^2 = 1, and i^2 = -1, hence cosh(T)^2 - sinh(T)^2 = 1, which is the equation of the unit hyperbola for x' = cosh(T) and y' = sinh(T).
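A tiny numeric confirmation of both parametrizations (a sketch with arbitrary a, b, and parameter values):

```python
import math

a, b, t = 3.0, 2.0, 0.8

# Ellipse (x/a)^2 + (y/b)^2 = 1 parametrized by (a cos t, b sin t)
x, y = a * math.cos(t), b * math.sin(t)
print((x / a) ** 2 + (y / b) ** 2)            # ~1.0

# Unit hyperbola x^2 - y^2 = 1 parametrized by (cosh T, sinh T)
T = 0.8
print(math.cosh(T) ** 2 - math.sinh(T) ** 2)  # ~1.0
```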
Why isn't this any different from f(f^-1(g(x))) = g(x), where we just call f^-1(g(x)) = gd(x)?
Because gd(x) has multiple expressions aside from just arctan[sinh(x)] and is a special function with multiple important applications, it is not an arbitrary choice of functional composition.
Seems like a nice guy. Nice.
Dude, you try to sell us an engineering clock with the numbers all in the wrong positions... dude.
wat? Not really. Everything's approximately accurate
@@PapaFlammy69 The worst offenders seem to be 12 * pi / e, which is something like 13.8687..., and tan^-1(11), which seems to be 1.4801... Most of the others are just off by a small tick, though.
I put the main steps here on a calculus 2 homework in 1995.
Geometry analysis please
Why not say y as waa 😂😅
Gooda Man Yum!
Omg dark mode desmos how papa???
Nice! Make another Olympiad Problem solving vid
Cool
gd(x) looks a lot like tanh(x)...
yeye
Yes. This is because gd(x) = arcsin[tanh(x)], and x is the first-order approximation of arcsin(x), hence tanh(x) is the first-order non-linear approximation of arcsin[tanh(x)].
you look so evil. great video man
I always enjoy your lessons
Secant of wah
secant squared of whyaahhhh
INTEGARAL
Misleading thumbnail. Should be a blackboard full of formulas. Waste of time for someone looking to have the functions related graphically, as the thumbnail implies...
test
ayyy
Does no one find it just lovely how he can NOT pronounce integrals lmao
donot
But you’re not funny
meth