Now prove that as R→∞, your function becomes the cos function.
@@jawadbenbrahim5933 my function has a much smaller infinity than the Taylor series. But infinity is infinity, and yes, it would make sense for me to put infinity into the equation and call it complete, but I'm also doing it for computer graphics.
@@SilentALume ngl you still need that x/pi tho
@@SilentALume the function you have created converges to the function (1/2 (-2 EllipticTheta(2, 0, 16/e^4) - EllipticTheta(4, 0, 2/e) + (2 EllipticTheta(3, -π x, e^(π^2/(-1 + log(2)))) - EllipticTheta(3, -(π x)/2, e^(π^2/(4 (-1 + log(2)))))) sqrt(π/(1 - log(2)))))/(EllipticTheta(4, 0, 2/e)). This function is different from cos(pi*x), since cos(pi*x) = 0 at x = n + 1/2 for n an integer, while your approximation is zero at x = n + 1/2 + epsilon, where epsilon is the error, on the order of 10^(-10). That being said, it seems the two functions have the same tops and bottoms; basically, as R goes to infinity the margin of error goes to 10^(-10)
Welcome back Ramanujan
💀💀💀
I feel like this video went from "oh huh I see where he's going with this" to "what the fuck" in the span of 0.2 seconds.
Wait until he finds out about sin(x+π/2)
☠️
thats cheating ;p
@@ivanb493 he's using imaginary exponents. Those are automatically trig functions. If you think this suggestion would be cheating, then the video is cheating too.
This man went from watching 3blue1brown to graphing complex equations with custom colors in 3 seconds
I was not expecting to get even that close.
I'll try to make something better
import matplotlib.pyplot as plt
import math

def apsin(x):
    pi = math.pi
    multy = 1  # unused
    # in what interval is x in relation to the roots of cos(x)
    # inty = x / 2  # this gives an approximation of how many roots are in front of x
    sign = 1
    if math.ceil(x / pi) % 2 == 0:
        sign = -1
    # wrap x into a single period centered on the quartic's peak
    x = x + pi/2 - pi * math.ceil(x / pi)
    inty = 1
    # quartic with roots at the neighbouring zero crossings of sin
    return sign * (1 / (9 * pi**4 / 16)) * (x + (inty + 1)*pi - pi/2) * (x + inty*pi - pi/2) * (x - inty*pi - pi/2) * (x - inty*pi + pi/2)

listy = []
liste = []
print(math.sin(360))
print(apsin(360))
for i in range(1000):
    print('ap')
    listy.append(apsin(i))
    liste.append(math.sin(i))
    print(apsin(i))
    print(math.sin(i))
plt.plot(listy)
plt.plot(liste)
plt.show()
i made a sin function by accident
Sigma
Cosine = e^(ix).real
Sine = e^(ix).imag
You can also convert this to:
i^x.real = cos(πx/2)
i^x.imag = sin(πx/2)
2:45 where the trivial stuff begins
thanks 🥰🥰🥰
2:43 “this video is gonna take about 2π”
turns out the video length is just about 6:28 haha
Lol. Prolly intentional.
2pi minutes is around 6:17 as a youtube timestamp
@@uggupugguwhat?
@@jesp9435let him cook
@@uggupuggu mmmmh....
sin(x+π/2) is a decent Approximation if you ask me
nah no one asks
I don't want to use sine
sin(x-π/2) = -cos(x), not cos(x)
You are implicitly using e^ix, which itself encodes the desired results from Euler's formula.
I thought so too, but 'i' is not being used as the imaginary unit. It comes from the summation.
I think part of the reason the quadratic in the exponent helped in making the cos approximation is the Jacobi theta function. The third Jacobi theta function is θ₃(z, q) = Σ[n=-∞,∞](q^(n²) · e^(2niz)). When z = 0 the imaginary part (e^(2niz)) disappears, so we get θ₃(0, q) = Σ[n=-∞,∞](q^(n²)). Splitting this into two sums, θ₃(0, q) = q^(0²) + Σ[n=1,∞](q^(n²)) + Σ[n=-1,-∞](q^(n²)) = 1 + Σ[n=1,∞](q^(n²)) + Σ[n=-1,-∞](q^(n²)); notice by symmetry ((-n)² = n²) that Σ[n=1,∞](q^(n²)) = Σ[n=-1,-∞](q^(n²)), therefore θ₃(0, q) = 2·Σ[n=1,∞](q^(n²)) + 1, and so Σ[n=1,∞](q^(n²)) = (θ₃(0, q) - 1)/2. Then for a lot of the sums that appear here we can just express them in terms of the θ function; by substitution and rearranging we can express many if not all of the sum terms this way. The Jacobi θ function is essentially an elliptic analogue of the exponential and exhibits quasi-double periodicity: the periodicity goes out to two dimensions and only roughly follows the periodic nature, so f(z+u) and f(z+v) may not equal f(z) exactly (here u and v are linearly independent), but there is still a trend. Though because the imaginary part is removed it is only singly quasi-periodic, hence yielding the cos approximation. Sorry if I made any mistakes, make sure to tell me. en.wikipedia.org/wiki/Doubly_periodic_function mathworld.wolfram.com/JacobiThetaFunctions.html en.wikipedia.org/wiki/Quasiperiodicity
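A tiny numeric sanity check of the Σ[n=1,∞](q^(n²)) = (θ₃(0, q) - 1)/2 step above, using nothing but direct truncations of the series (my own sketch, not part of the original comment):
import math

def theta3_at_zero(q, N=60):
    # truncated theta_3(0, q) = sum over all integers n of q^(n^2)
    return sum(q ** (n * n) for n in range(-N, N + 1))

def one_sided_sum(q, N=60):
    # truncated sum over n >= 1 of q^(n^2)
    return sum(q ** (n * n) for n in range(1, N + 1))

q = 0.3
print(one_sided_sum(q), (theta3_at_zero(q) - 1) / 2)  # the two values agree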
I literally just learned in my math class yesterday lol
@@John-cl8iv2 Oh cool what class is that?
@@BaukBauk9491 Wait, never mind, I learned about the Jacobian in calc 3
That was a nice digestible explanation. Well done
@@BaukBauk9491 you learn about theta functions in complex analysis right?
cosine truly was the equations we made along the way
Wait until bro discovers taylor series 💀💀💀
hahah literally what i thought
Dumbass he literally said it in the beginning
True, but just think about how the numbers involved here don't blow up. This can produce an amazing approximation with as few as 6 terms, and with wrapping it extends to the whole real line.
@@Tabu11211 6 terms only makes this approximation valid for around |x|
@@gamerboy7224 please do, because I might be missing something. What I did to extend it was this: (x - 2π·floor(x/(2π)))/π. Replace x with that and it will use the single cycle for the whole domain.
4:36 bro the music is pi!!! that's how i memorize it so i recognized immediately, this is awesome!
after you zoomed out at 0:12 ... i instantly went y=0 will do xD
Bro went from "sooo so close" to entire mathematical documents that already exist
Obviously, very trivial stuff really
Absolute cinema
close enough, welcome back Ramanujan
unbelievable work.
I'm too dumb to understand the process, but it looks like you put hard work into this one 🔥🔥🔥
YOU FINALLY GOT A VIDEO THAT WENT SEMI VIRAL YESSSS
I was thinking this might have applications to like work out cos quickly without a calc until I saw the final equation XD
it feels like a taylor expansion anyways but in the most roundabout way possible
Instead of adding parabolas in the beginning, you could multiply them
y = (1-(x/(0.5π))²)*(1-(x/(1.5π))²)*...
and that will be exact as you add infinite factors...
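For anyone curious, here's a rough Python check of the truncated product (my own sketch; the infinite product over the roots ±(k+1/2)π is the classical product form of cos, so the truncation creeps toward cos(x) as factors are added):
import math

def cos_product(x, K=2000):
    # truncated product over the zeros of cos at (k + 0.5) * pi
    p = 1.0
    for k in range(K):
        root = (k + 0.5) * math.pi
        p *= 1.0 - (x / root) ** 2
    return p

print(cos_product(1.0), math.cos(1.0))  # close, but convergence is slow (error roughly x^2 / K)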
In desmos, it's really easy to get an approximation of the cosine function: cos(x)
Great work here! To me, it seems that you've derived a quirky Fourier-Poisson approximation with a mainly hyperbolic cosine approximation. I think one of the more concrete places to start would be the complex exponential definition of trigonometry, and approximate that, instead of doing visual approximation. Overall, great job though!
The design of the cosmos (simplified)
Bro just bruteforce the taylor series
no. the taylor series is much less efficient than this
Rung the bell. Love this exploratory chaos.
Divide the input by 2pi and take the remainder. Then you only need to approximate that little range from 0 to 2pi, and every other input will work too. It's what computers actually do when they calculate these functions. It's called modular division and range reduction, and it's used everywhere.
You can actually do more than 2pi because between 0 and pi is symmetrical to the part between pi and 2pi. And between 0 and pi/2 is also symmetrical to the part between pi/2 and pi. That leaves you with a tiny little piece of curve. And if you can approximate it, you get the rest of the values everywhere else.
I tried a quadratic and got -0.35*x^2 -0.1*x -1. Looks pretty close by eye. I'm sure it's possible to do infinitely better of course. Computers can break it up into many segments, with lookup tables for the best fitting polynomial for each segment. You can do even better than that, but lookup tables and polynomials are very fast to compute.
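A minimal Python sketch of that range-reduction idea (my own illustration; the polynomial coefficients below are just a quick fit for [0, π/2], not the commenter's quadratic):
import math

def reduced_cos(x):
    t = math.fmod(x, 2.0 * math.pi)      # periodicity: fold into [0, 2*pi)
    if t < 0.0:
        t += 2.0 * math.pi
    if t > math.pi:                      # cos(2*pi - t) = cos(t): fold into [0, pi]
        t = 2.0 * math.pi - t
    sign = 1.0
    if t > math.pi / 2.0:                # cos(pi - t) = -cos(t): fold into [0, pi/2]
        t = math.pi - t
        sign = -1.0
    # any small-range approximation goes here; a low-order polynomial is enough
    return sign * (1.0 - 0.4967 * t * t + 0.03705 * t ** 4)

print(reduced_cos(100.0), math.cos(100.0))  # agrees to about 3 decimal places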
well done!
You might have just accidentally created an actual expansion of cos, especially with using e in your constant (cos(x) = (exp(ix)+exp(-ix))/2 after all).
Also, the second sum of 1 from 1 to R is just equal to R, the sum is unnecessary.
It's because I wanted it to be a whole number
Then the floor function might be your go-to ig
@@SilentALume you can make the slider go up by 1s
@@SilentALume use the “step” option when you get into the slider range editor and make it 1 is what the above meant.
another video with some more explanation of the process/your thinking would be awesome!
The co-sine function
"tHe cOS fUnCtiOn" it sent vibrations down my spine
Very nice!!
your last term can just be simplified as "R"; you don't need the summation of 1 from 1 to R
2:44 this is when shit gets serious
u should do some exploration on y=sqrt(24x+1) there are beautiful patterns in the primes of the function.
I have no clue how you did anything, but this is the type of smart I aspire to be
4:55 You never needed pi to go over the circle though...
Truly remarkable 👏🏻
Just use taylor’s formula ?
hey! great vid, just want to ask what is the program you are using to annotate/draw and make text boxes during the timelapse?
Thank you.
Taylor series: Really bro?
Man was I a fool to think that when I clicked on this video it was gonna be about anything I understand.
Subscribed just because of this xD
the (D^ix + D^-ix)/2 type stuff you have going on in there is literally just the definition of cos for D=e. It's not exactly what you have, but something like that is going on there
from math import pi

def cos_approximations(x):
    R = 10
    total = 0
    e = 2.71828182846
    D = e / 2
    for n in range(1, R + 1):
        d1 = -D**(-(x/pi + 1 - 2*n)**2)
        d2 = D**(-(-x/pi + 1 - 2*n)**2)
        d3 = 2 * D**(-(2*n - 1)**2)
        denom = 0
        # inner sum forming the denominator of the formula
        for i in range(1, R + 1):
            NUM = D**(8*i - 4) + 1 - 2*D**(4*i - 1)
            DENOM = 2 * D**(4*i*i)
            denom += NUM / DENOM
        total += (d1 - d2 + d3) / denom + 1/R
    return total

print(cos_approximations(1))  # --> 0.5403023059153995
translating bros formula in python code 💀
also, this is roughly what the cos and sin functions look like as truncated Taylor series in python code:
from math import pi, factorial

def cos(x):
    x = x % (2*pi)
    total = 0
    for n in range(10):
        total += ((-1)**n * x**(2*n)) / factorial(2*n)
    return total

def sin(x):
    x = x % (2*pi)
    total = 0
    for n in range(10):
        total += ((-1)**n * x**(2*n+1)) / factorial(2*n+1)
    return total
bros video actually blew up
It seemed quite obvious that what you were actually doing was generating an alternative (and much more complicated) way to express the Taylor series?
I clicked on this video expecting to see a fast approximation, useful for things like games/computer graphics, and in the beginning, this seemed like what you were doing, but then wham! 🙂
Thats impressive and also genius
this is insane
also another interesting way to do this kind of thing is with the bell curve. I may be wrong, I am an engineering student
As people have probably already said, the cos function is very closely related to the exponential function. And you probably made the vanishing part of the Gaussian go to zero with the limits
Fun desmos tip is you can make functions like
T(x, y, u) = x * y * u
Or without the static-like function
x1 = 2
y1 = 5
u = 4
T = x1 * y1 * u
bro forgot that adding parabolas gives you back a parabola
Good job
Hi Richard
Ok this is cool, but by using the exponential function to approximate a trigonometric function (cosine), aren't you effectively approximating a trigonometric function with a trigonometric function? Since cos(x) = (1/2)(e^(ix) + e^(-ix)) and your equation looks oddly similar, in the sense that it is in the form of (the sum of) (e^f(x) - e^g(x))/c + some error, where the error decreases as I increases. I don't feel like attempting to prove it, but this looks like some sort of Taylor expansion of Euler's formula.
I might be wrong of course.
good luck. I couldn't even figure out the sin function
"if R goes to infinity what is D?"- SilentALume
instead of writing the sum of 1 for "I" that goes from 1 to R (in the denominator of the last term), you could've just written R
2:48 Yoo, is that the fibonacci music?
that wasn’t obvious for me
Didn't expect to see you here
One thing I made to approximate cosine fast computationally is below. This works because it's fitted to the first quarter-period of a cosine function; it's accurate to 0.1%
a = mod(x,1) - 0.5
c = mod(x,2) - a - 1
b = a^2*(4.897 - 3.588*a^2) - 1
Then graph 2*b*c
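If I'm reading it right, 2*b*c traces a cosine shifted by a quarter period, i.e. sin(πx); here's a quick Python sketch checking the ~0.1% claim against that reading (my interpretation, not the commenter's code):
import math

def fast_trig(x):
    a = math.fmod(x, 1.0) - 0.5
    c = math.fmod(x, 2.0) - a - 1.0            # alternates between -0.5 and +0.5
    b = a * a * (4.897 - 3.588 * a * a) - 1.0  # even polynomial fit over one half-period
    return 2.0 * b * c

worst = max(abs(fast_trig(k / 250.0) - math.sin(math.pi * k / 250.0)) for k in range(1000))
print(worst)  # stays around 1e-3, consistent with the 0.1% figure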
huh
U must be genius af
You can't just use pi as a unit for determining your video lengths 😭😭😭😭
Are you familiar with CORDIC? It is much faster on a computer than this series, even though it is a very cool series!
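For reference, a bare-bones rotation-mode CORDIC sketch in Python (floats instead of the fixed-point shifts real hardware would use, and it assumes the angle is already reduced to roughly [-π/2, π/2]; this is my illustration, not the video's method):
import math

ANGLES = [math.atan(2.0 ** -i) for i in range(24)]
GAIN = 1.0
for a in ANGLES:
    GAIN *= math.cos(a)   # compensates the stretching caused by the 24 micro-rotations

def cordic_cos_sin(theta):
    x, y = GAIN, 0.0
    for i, a in enumerate(ANGLES):
        d = 1.0 if theta >= 0.0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        theta -= d * a
    return x, y           # (cos, sin) of the original angle

print(cordic_cos_sin(0.7), (math.cos(0.7), math.sin(0.7)))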
1 - mod(x,2π)²/2 + mod(x,2π)⁴/24 - mod(x,2π)⁶/720 would probably be better
personally i like real(i^(x/(π/2))) as an approximation of cos
sine is abbreviated to sin but not pronounced like sin. It rhymes with pine
bro was DESPERATE
good shit
If you're raising e to imaginary powers then you're no longer approximating cos. By Euler's identity we can derive cosine as exactly equal to: cos(x) = (e^(ix) + e^(-ix))/2
So using complex exponents is kinda cheating
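A one-liner numeric check of that identity (just numerics, nothing from the video):
import cmath, math
x = 1.234
print(((cmath.exp(1j * x) + cmath.exp(-1j * x)) / 2).real, math.cos(x))  # identical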
wait until this guy finds out about taylor series
Pi song in the background :sob:
cinema
approximate it with a forrier series
plugging this instead of trig functions, to avoid trig in pre calc
@@thomasbeaumont3668 lol
Hey man, this is really amazing. I actually have an interesting proposition for you. I have this idea for how to find the inverse of y = x + sin x, but it requires me to replace one function with a different function having the same graph at one step. And the idea isn't about replacing the function at the start either. I can't explain it here. Maybe the idea is good, maybe it's not; I think it needs some work, so I want to share it with you. Is there some way I could get in contact with you?
If this does work by some miracle, I think we would be closer to solving the Kepler equation. It's at least worth a shot.
If you want to contact me, search "SilentALume" on Discord so you can DM me and we can get on a call.
Just use Sin
Here's a fun question: How can you prove your function is similar to cos(x) without desmos? 🙂
This is true mathematics, bravo
This is what i do for fun
1:52
Bell-shaped function
en.wikipedia.org/wiki/Bell-shaped_function
Subtract them and multiply that difference by 1000 to see the micro error
he zoomed in
all I need to approximate the sin function is to pull up to the crib w some baddies. The whole function be sinning.
Interesting
'use the long method'
cool
Going around a circle using e
Why do you have the sum from I=1 to R of 1? That is just R.
Maclaurin and Taylor: 💀
bro has not heard of a taylor series before
0:28
are you swiftie?
What's the song called? Can't find it with Shazam
Is it exactly the same as cos(x)?
@@bluecherry586 it's off by a very small decimal
@@bluecherry586 but it might be possible
@@bluecherry586 it's just that when R goes to infinity I would have to find the best number for D for it to equal the cos function
your function literally uses Euler's formula. If you looked into Euler's formula, you would pretty much immediately come up with (e^(ix)+e^(-ix))/2 = cos(x). Takes less than 10 mins to figure that out.
but hey, congrats at least for finding a way more complicated way of writing that equation.
Nope, it has nothing to do with Euler's formula. Instead, it's related to the Jacobi theta functions.
@@mismis3153 If you look at the variables, this is a summation of fractions of exponents of D to imaginary values, with D being e/2. Euler's formula. I bet you could simplify it to (e^(ix)+e^(-ix))/2
@@velocity_raptor you can't... the expression is completely real valued, and the exponents are always squared.
The only way to find a closed form is to use... the Jacobi theta functions. In the numerator, the first 2 sums can be expressed as theta_3, the 3rd as theta_2, the one in the denominator as theta_4, and the rightmost sum simplifies to just 1. It's long and messy, but I can send you the entire expression if you are interested.
so what's your class in critical legends
Radial basis functions?
So, there’s another way of doing this.
So cos X in degrees is SQRT(1- sin^2 X)
Can we deduce sin X directly? No, we can't; however, we know that sin X = chord(2X)/2
And we can derive the chord of X rather easily. So for any circle of radius 1 the chord of 180° = 2 and the chord of 60° = 1. As it turns out, the chord of 108° is also the magic number. This gives the sine of 54°. This flips to give the sine of 36° as the cosine of 54°, which then gives the chord of 72°.
These three components can be used to calculate the sin of every angle between 0 and 90° at a spacing of 1.5° again to the precision of whatever device you are using.
In fact this is an artificial degree limitation we put on the chord. However, we can continue to define a circle (sin, cos) at 0.75°, 0.375° and so on intervals. As we do this we can define a circle to the point where cos = 1 for the smallest angle in degrees above zero in the Markov chain. For 8-digit resolution it's about 13 iterations.
So let's think about this mathematically. If we halve, say, 180° -> 90° and so on until we get to the point that the bisectors of 2x = 1 and x = 1, a very small angle x°, then we can argue that the function in chord x between 2x and x is indistinguishable from a straight line.
So let's say we wanted to know the chord of 140.25 degrees. Well, the first step is counterintuitive, but we find our range of computational linearity. On this device, an iPad, that is achieved at an angle between 0.00000067055225372315° and its double. Next we find 140.25 / 90 = 1.55833333333333. Then multiply the two to get 0.00000104494392871857° on this particular device.
And so at this point, having done no trigonometry at all we can use a Markov chain to estimate the sin of 70°. We need one other thing which brings the trig back into the equation, that is pi/180°. Chord X = X * pi/180° when X is below 2E-6°.
Our formula is Chord 2x = 2 * Chord * Bisector X
The bisector X = SQRT(1 - chord^2 x/4)
Let’s see what happens.
The result of the Markov chain is
Estimate of 70.125. cos° = 0.339969239730994 sin° 0.940436556093355
Computer value . cos° = 0.339969239730981 sin° 0.940436556093355
It should be noted that the angle returned was 140.250000000002
This was not optimized, and I probably could get a more precise estimate if I assumed that sin x was linear for larger values of X. In addition, if we ask the basic question, the precision of a graph of, say, 0.10 meter in radius is at best 10 microns. So the error tolerated in this case is 10E-5; in other words, I have 10 orders of magnitude more precision than required to complete the problem.
I should note that I am not inventing new math here. The Plimpton 322 triples essentially defined points on the outside of a circle around 1800 BCE, about the same time the square root of 2 was formalized. With Pythagoras, our not-so-friendly cultic mathematician, we could define a circle in terms of 180, 165, 150, 135, 120, 105, 90, 75, 60, 45, 30, 15°, and 0° rather easily. With Ptolemy's quadrilaterals we can get this to every third angle in terms of chords. We can mathematically readily estimate the chord of 40°, which in all its permutations gives chords to 1° and sines and cosines to 1/2 degree. Which is usually enough.
There is something I forgot to mention. We can also plot directly onto the graph using chords if we don't want to use sines or cosines as an intermediate.
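Here's how I'd sketch that chord-doubling procedure in Python (my reading of the comment: start from a chord so small it equals its arc, then apply chord(2x) = 2·chord(x)·√(1 − chord(x)²/4) repeatedly; the function names and the choice of 27 doublings are mine):
import math

def chord_sin_cos(angle_deg, doublings=27):
    target = 2.0 * angle_deg                 # we want chord(2X), since sin X = chord(2X)/2
    tiny = target / 2.0 ** doublings         # tiny starting angle, in degrees
    chord = tiny * math.pi / 180.0           # chord of a tiny angle ~= its arc length
    for _ in range(doublings):
        # chord(2x) = 2 * chord(x) * bisector(x), with bisector(x) = sqrt(1 - chord(x)^2 / 4)
        chord = 2.0 * chord * math.sqrt(1.0 - chord * chord / 4.0)
    s = chord / 2.0                          # sin X = chord(2X) / 2
    c = math.sqrt(1.0 - s * s)               # cos X = sqrt(1 - sin^2 X), valid for 0..90 degrees
    return s, c

s, c = chord_sin_cos(70.125)
print(s, c)                                               # ~0.940436556..., ~0.339969239...
print(math.sin(math.radians(70.125)), math.cos(math.radians(70.125)))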
Just wondering -- why use sum_{I=1}^R 1 when you could just have written R?
Just use the Taylor series lol
let's see the Sine function