Did you like it!!!🤩🤩🤩 I've finally (or at least I hope😅🤣) ended this debate!!!😅🤣😉🤗
This property is the reason why the Leibniz notation is my favorite
Physicists approve😅🤣
Yes, you're right, it's my favourite too!!! Leibniz notation is the BEST!!!😉
This is actually how differential forms work: we define the derivative operator as df=\sum_{n}\partial_{x_{n}}f\wedge dx_{n}, where each x_{n} is a basis vector of sorts and dx_{n} is also a basis vector of sorts, but over an exterior algebra, and the wedge product takes two elements and produces an oriented area. If we only have one basis vector, we end up with df=f'(x)\wedge dx, which is effectively saying that dx^{2}=0, meaning we can replace the wedge product with standard multiplication, giving us df=f'(x)dx. We can also integrate over a region, giving us \int_{Ω}df=\int_{Ω}f'(x)dx, which is basically the anti-derivative.
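A quick numerical sanity check of that last identity in one variable, that integrating df = f'(x) dx over an interval recovers f(b) − f(a), is sketched below; the choice of f(x) = x³ and the interval [1, 2] are mine, not from the comment.

```python
# Sketch: integrating df = f'(x) dx over [a, b] should recover f(b) - f(a).
# f(x) = x^3 and the interval are arbitrary illustrative choices.

def f(x):
    return x ** 3

def f_prime(x):
    return 3 * x ** 2

a, b, n = 1.0, 2.0, 100_000
dx = (b - a) / n

# Midpoint Riemann sum of f'(x) dx over [a, b].
integral = sum(f_prime(a + (i + 0.5) * dx) * dx for i in range(n))

print(integral)      # ~7.0
print(f(b) - f(a))   # exactly 7.0
```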
Cool thanks!!!!😉🤗
This kid has no idea what he is talking about. He has never heard of differential forms, let alone the subjects of analysis, topology, measure theory, and rigorous proof; he probably read an American middle school calculus textbook, and I can see that he is a ph*sicist, which explains his lack of logical detail and depth of understanding. This is a common trait of ph*sics fans getting too confident about their mathematical understanding. Please, stick to calculating the speed of a frictionless cube on an inclined plane with no air resistance… don't post videos like this and not expect to be made fun of 😂😂😂
if only youtube supported latex
@machine-boy That would be ridiculously incredible!!!🤩
First, I'm not a physicist... yet... Second, I've been studying more maths than physics, from an Italian real analysis textbook. Then, if "my" reasoning doesn't work for you, I think you should watch the video again and understand that what I'm doing IS (and I'm 100% sure about that) CONSISTENT and RIGOROUS!😅🤣🤣🤣 Then yes, I certainly know nothing compared to both of you, but I'm young and respect those who know more than me. I'm not arguing (because I can't) about these differential forms, because I've never studied them before. But please understand that what I'm talking about is 100% TRUE!😉🤗
Can someone tell me who started to interpret "fractions" and "ratios" as different things? It makes no sense to do that!!! And in practice we do NOT actually keep to that distinction anyway. There is a thing called the
Field of Fractions of an integral domain
So, please, tell me, who was the genius that decided it was a good idea to use "ratio" for
π/e, (x²+1)/x, etc ... ?
Also,
2/3 is a fraction, but 2k/3k is a ratio, for k non-integer ...
Wow, that's good terminology.
Well, I don't really know why... Probably because fractions generate the rational numbers while ratios in general just mean division???🤔🤔🤔 I need to find out!!!😅🤣
@@JonnyMath I have never heard of a distinction between the two and always thought they meant the same thing, and I still continue to think so
Ratios are vectors in essence (they are tuples with an unbounded number of components); fractions, on the other hand, are bounded by being pairs. Furthermore, if we interpret fractions as rational numbers, we can certainly say that every fraction defines a ratio, but not every ratio defines a fraction. Rationals are equivalence classes: that's why, if you take two fractions with the same ratio, they are the same rational number
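A tiny illustration of the "equivalence classes" point (the specific pairs below are my own examples): two fractions a/b and c/d name the same rational exactly when a·d = b·c, which is the normalization Python's fractions.Fraction performs.

```python
from fractions import Fraction

# Two different fraction symbols, one rational number (one equivalence class).
print(Fraction(2, 3) == Fraction(4, 6))   # True: both reduce to 2/3

# The equivalence relation on pairs, spelled out: (a, b) ~ (c, d) iff a*d == b*c.
def same_rational(a, b, c, d):
    return a * d == b * c

print(same_rational(2, 3, 4, 6))   # True
print(same_rational(2, 3, 3, 4))   # False
```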
Yeah right, why do we call some numbers integers just because they happen to have no decimal places, and others real just because they happen to be missing an i part.
Just call them all numbers, right?
I really don't get why people try to argue that this interpretation is somehow "invalid" or "non-rigorous". I get that the epsilon-delta definition of the derivative exists, but who said we can only have one way of thinking of the derivative?
But apart from that, these differentials are built on that definition anyway😅🤣🤣
About the claim: this is rigorous.
Let me make a comparison with another context: if you have a differential equation that on one side specifies taking the square root of that whole side, then the first step in solving that equation is to square both sides. That operation is likely to introduce a spurious solution. Chances are you end up with two solutions, and you have to check against the original equation to see which one actually satisfies it.
Returning to dy/dx: the concern, I assume, is that shifting between treating it as a ratio and treating it as an operator may, under specific circumstances, allow a wrong result to slip in.
The example that you discuss shows that the method will give the correct result in many cases. The question is: under any circumstances, is it certain a wrong result cannot slip in?
As an example of a wrong result that slips in: "proofs" that 1=0
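A concrete instance of the spurious-solution worry (the equation below is my own example, not from the video): squaring √x = x − 2 yields x² − 5x + 4 = 0, with candidate roots 1 and 4, and only x = 4 survives the check against the original equation.

```python
import math

# Original equation: sqrt(x) = x - 2.
# Squaring both sides gives x^2 - 5x + 4 = 0, i.e. candidate roots x = 1 and x = 4.
candidates = [1.0, 4.0]

for x in candidates:
    lhs, rhs = math.sqrt(x), x - 2
    status = "genuine" if math.isclose(lhs, rhs) else "spurious"
    print(f"x = {x}: sqrt(x) = {lhs}, x - 2 = {rhs} -> {status}")
# Only x = 4 satisfies the original equation; x = 1 is the spurious root.
```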
I'm not sure what the point of the middle of the video was. Showing that dy = dx only works if y = x. And then you didn't use that equality anywhere anyway. Seems to just confuse the matter.
@APaleDot That argument was needed to explain why we replace Δx with dx!!! Sorry, I wasn't that clear so I made another video about it (~2 minutes) where I explain why dx=Δx 😉🤗
I read dy=(dy/dx)(dx) -> dy=dy
Yes, but not because the dx's cancel out😅🤣 Btw, I don't know why I got obsessed with differentials, but I came to the conclusion that the physicists' perspective is always worth considering😅
wait you’re left handed? me too, I considered getting myself a whiteboard for fun but I thought I would definitely smudge all over the board when I write lol (nice video btw)
Thanks!!! Yes, don't worry, it won't happen if you don't touch it!!! And honestly, writing on a whiteboard isn't like writing on paper, so your hand won't touch it!!! I recommend a whiteboard because it enables you to do maths in a cool and effective way!!!😉🤩
@@JonnyMath if I actually get one I’m learning the pen switch technique immediately and then do every single bracket with a different colour xd
@@Xponent-nb3he Yes it's nice but I find it easier with thinner markers!😅🤣
So the reasoning is the fact that you found someone else calling this dx a number, and then you did some calculations lol
Your explanation is confusing: how can you have dy with Δx present when dy depends on Δx approaching zero?
In fact dx does NOT approach 0. It's just an ordinary Δx. I write dx instead of Δx because the derivative of f(x)=x is 1, hence its differential is df=1·Δx=Δx; since the function is x, we might as well call this differential dx, and it equals Δx precisely because f'(x)=1
Forget about this dx being the same notation as in integrals.... The dx in integrals is there because it tells us which variable we are integrating with respect to
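To make the "dx is just a finite Δx" reading concrete (a sketch; f(x) = x² and the step sizes are my own choices): df = f'(x)·dx is the increment along the tangent line, Δf = f(x+dx) − f(x) is the increment along the curve, and the gap between them shrinks faster than dx.

```python
# df = f'(x) * dx is the tangent-line (linear) increment for a finite dx;
# delta_f is the actual increment of the curve. They agree to first order.

def f(x):
    return x ** 2

def f_prime(x):
    return 2 * x

x = 1.0
for dx in (0.5, 0.1, 0.01):
    df = f_prime(x) * dx           # the linear increment (the "differential")
    delta_f = f(x + dx) - f(x)     # the actual increment
    print(dx, df, delta_f, delta_f - df)   # the gap here is exactly dx**2
```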
Great video
Thanks!!!😉🤗
I do not agree with the statement that we can assume the differential of the identity function equals the horizontal increment taken for measuring the linear increment of the function. There are orders of infinitesimals and infinities; therefore I could accept that, for x tending to 0, sin(x) is asymptotically equivalent to the identity, but I would not say the same for x tending to infinity
For instance, the Δx in f: x |-> x could be greater than in g: x |-> x*f(x), and that's why the linear increment of the curve is greater in g: the vertical increment differences are being considered even more negligible compared to the denominators. I mean, I get that the dx must be equal to the h; what I'm saying is that you didn't justify why it is a constant for every function. I don't think that is the case, because you can never find a value for dx nor for dy: you just get a "ratio", but you cannot determine which part of the expression is dy and which is dx
Thank you for your comment, I wasn't that clear... I'm uploading a video now on tiktok where I clarify what I said!!! I hope you understand what I'm saying!!!🤗
It's just 2 minutes long so it is not a long video but remember that these differentials aren't the ones that mean "infinitesimal" like the dx of the integral😅
This video was just awesome. I watched every second of it and I was actually surprised by your explanation! I personally always thought that dy/dx was NOT a fraction, but could be treated as one in equations (such as multiplying both sides by dx), and this can also be easily proved with the chain rule [ua-cam.com/video/Jldm88d68Ik/v-deo.htmlsi=B5R9sGS9GMvftNto]
I've never gone any further than this, but now it's clear to me that it is in fact rigorous to treat dy/dx as a ratio🎉👍
Also, what program did you use for the thumbnail?👀
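For reference, here is the chain-rule manipulation mentioned above, written out with differentials (a sketch of the usual Leibniz-style bookkeeping, not a reproduction of the linked video):

```latex
y = f(u), \quad u = g(x)
\;\Longrightarrow\;
dy = f'(u)\,du = f'\!\bigl(g(x)\bigr)\,g'(x)\,dx
\;\Longleftrightarrow\;
\frac{dy}{dx} = \frac{dy}{du}\cdot\frac{du}{dx}
```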
Thanks!!! I've always considered dy/dx as a notation and d/dx as an operator!!! I use Photoshop!!!
It's not rigorous! dx on its own is zero. If you forget that it's all about limits, which you will in this way, it's easy to make nonsense out of it.
Noooo!!!! The dx I've defined is not zero!!!! It's just a number!!! I use dx because the differential (according to my definition) of f=x is Δx, so it's convenient to write df=f'(x)dx. Here df is the linear increment of the function and f'(x) is the derivative (defined using limits), but the Δx -> 0 in the derivative IS NOT THIS dx
I define df as f'(x)Δx, and it's convenient to replace Δx with dx for that reason!!!!
dx is not zero on its own.
dx, dy --> infinitesimals - no definite numerical value.
Delta x, Delta y --> finite differences (numerical values).
Don't mix them: dy/dx is a limit, dy and dx are useful abstraction tools.
Perhaps you have no knowledge of numerical analysis yet.
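To illustrate the distinction being drawn (a sketch; the function and step sizes are my own choices): the finite difference quotient Δy/Δx is a number you can compute for each Δx, and the derivative is the limit those quotients approach as Δx → 0.

```python
import math

# Finite difference quotients Delta_y / Delta_x for f(x) = sin(x) at x = 1,
# approaching the derivative cos(1) as Delta_x shrinks.
x = 1.0
for delta_x in (1e-1, 1e-3, 1e-5):
    delta_y = math.sin(x + delta_x) - math.sin(x)
    print(delta_x, delta_y / delta_x)

print(math.cos(x))   # the limit the quotients approach
```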
Firstly, the dy and dx I was talking about aren't the dx that appears in the integral sign. I said that you can think of dy/dx as pure notation (describing them as you said), or as an operator, or in this way, where dy and dx, even though they use the same symbols, refer to different things!!! This is a sort of notational trick to get the derivative as a ratio of finite quantities!!!😉
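A compact way to state the "same symbol, two different objects" point from this reply (just restating the definitions used in the thread):

```latex
f'(x) = \lim_{\Delta x \to 0}\frac{f(x+\Delta x)-f(x)}{\Delta x}
\quad\text{(the derivative: a limit, not a quotient of two fixed numbers)},
\qquad
df := f'(x)\,dx \;\text{ with }\; dx := \Delta x \text{ finite}
\;\Longrightarrow\;
\frac{df}{dx} = f'(x)
\quad\text{(a genuine ratio of finite quantities)}.
```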
Well, that approach is anything but rigorous.
What approach isn't rigorous???😅
Just found your channel. Can we be friends ?
Do you also make maths videos??? I read rendoesmath😅🤣