There is a way to understand why natural log fills in the gap for the general integration power rule. Let n = -1 + h, and let the constant of integration c be -1/h. What we're doing is setting the original exponent arbitrarily close to the problem point of -1. Since the constant is arbitrary, all that matters is that it doesn't depend on x, so we set it to a strategic value in terms of h that will allow the limit to be reconciled. This gives us:
1/(-1 + h + 1)*x^(-1 + h + 1) - 1/h
Clean this up:
1/h*x^h - 1/h
Take the limit as h goes to zero. We get an indeterminate form, so set this up for L'Hopital's rule:
(x^h - 1)/h
dN/dh = d/dh [e^(ln(x)*h) - 1] = ln(x)*e^(ln(x)*h) = ln(x)*x^h
dD/dh = 1
Construct the result:
ln(x)*x^h
Since x^h can now be evaluated directly at h=0, as long as x is non-zero, this leaves us with ln(x). Exactly as we were expecting.
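The limit above is easy to sanity-check numerically. A minimal Python sketch (my own, not from the thread) comparing (x^h - 1)/h against ln(x) as h shrinks:

```python
import math

def power_rule_limit(x, h):
    # Antiderivative of x^(h-1) by the power rule, with the constant
    # of integration chosen as -1/h: this gives (x^h - 1)/h.
    return (x**h - 1) / h

x = 5.0
for h in [1e-1, 1e-3, 1e-6]:
    # The approximation closes in on ln(x) as h -> 0.
    print(h, power_rule_limit(x, h), math.log(x))
```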
A more direct way might be to show that adding two integrals of 1/x together can be represented as a single integral, like ln a + ln b = ln (ab). This can be done geometrically exploiting that 1/x = c/(cx). That doesn't prove it's the *natural* logarithm ofc.
I had a similar thought. Putting a finer point on how you said it: if you find the area under the curve from 1 to a^n, it will be n times the area under the curve from 1 to a, which makes it a logarithmic function.
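That claim can be checked without assuming any log identities, by integrating 1/x numerically. A crude midpoint-rule sketch (the helper name is mine):

```python
def area_under_reciprocal(a, b, steps=100_000):
    # Midpoint-rule approximation of the area under 1/x from a to b.
    width = (b - a) / steps
    return sum(width / (a + (i + 0.5) * width) for i in range(steps))

a, n = 3.0, 4
# The area from 1 to a^n should be about n times the area from 1 to a.
print(area_under_reciprocal(1.0, a**n) / area_under_reciprocal(1.0, a))
```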
Let's also extend it to all x ≠ 0.
For x ≠ 0, let |x| = e^t, then:
|x|/x dx = e^t dt
e^t / x dx = e^t dt
dx / x = dt (e^t ≠ 0)
∫ (1/x) dx = ∫ dt = t + C = ln |x| + C
Note:
d/dx(|x|) = d/dx(x) if x > 0; d/dx(-x) if x < 0
= 1 if x > 0; -1 if x < 0
= x/x if x > 0; -x/x if x < 0
= |x|/x
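A finite-difference check of d/dx ln|x| = 1/x on both sides of zero; a small sketch, under the assumption that a central difference with eps = 1e-6 is accurate enough here:

```python
import math

def central_diff(f, x, eps=1e-6):
    # Central finite-difference approximation of f'(x).
    return (f(x + eps) - f(x - eps)) / (2 * eps)

for x in [2.0, 0.5, -0.5, -2.0]:
    # Both columns should agree, including for negative x.
    print(x, central_diff(lambda t: math.log(abs(t)), x), 1 / x)
```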
I have wondered this for SO long! I was always disappointed with the Fundamental Theorem of Calculus as a proof because it started with the assumption that d/dx lnx = 1/x. It's circular reasoning. That's why I'm so happy that you made this video. Thank you so much!
In my opinion, the better "why" is that ln(x) really, really, really wants to be zero. It's not actually the constant function y=0, but it tries so hard that it fits into the whole weird sequence of derivatives and integrals of the polynomials where 0 might fit. It's also a function that is smaller (after a certain point) than ANY linear function you can name, no matter how tiny an 'a' you put in front of y=ax. Zero is such a poorly behaved number that other things sometimes seem to spring up and take its place in various ways.
You can show that it is a degenerate case of the power rule, where the integration produces division by zero. By strategically assigning the arbitrary constant of integration as a function of the exponent, you can show that it is consistent with the power rule. Here's how.
Let h be an arbitrary infinitesimal offset of the exponent.
integral x^(-1 + h) dx = 1/h * x^h + C
Let C = -1/h. You'll see why.
(x^h - 1)/h
Take the limit as h approaches zero. This is an indeterminate form of (x^0 - 1)/0, which becomes 0/0, which means we can use L'Hopital's rule.
dN/dh = d/dh x^h - d/dh 1 = d/dh x^h
dD/dh = d/dh h = 1
Rewrite x^h as e^(ln(x)*h). Take its derivative in the h-world:
ln(x)*e^(ln(x)*h) = ln(x)*x^h
Thus, the limit becomes:
ln(x)*x^h / 1
At h = 0, as long as x isn't zero, x^h = 1. Thus, the finished form of the limit is ln(x).
Yes. Essentially you show that the integral over 1/x from a to b is the same as the integral over 1/x from b times c to a times c, i. e. that the integral function F has the property F(b) - F(a) = F(b*c) - F(a*c). From that you can deduce that F has to be a logarithmic function. That's roughly how it was done historically by Gregoire de Saint-Vincent and Alphonse Antonio de Sarasa in 1647 to 1649. And then you have to calculate the base of that logarithm. As far as I know, that took some further decades. Probably was first done by Roger Cotes in 1714, but surely again some decades later by Euler.
You don't. That's just half the Leibniz notation, and it's missing the other half. When you differentiate e^t relative to x, you get (e^t) * dt/dx. You then have to determine whether A) t is a constant relative to x, in which case d/dx e^t = 0, or B) whether x depends on t, and how, in order to determine dt/dx.
Another proof: y = e^x, so x = ln(y), and further information is needed: dy/dx = e^x (very important information!). Also using the fact that dx/dy = 1/(dy/dx) ... looks easy, but the proof takes a while ... so dx/dy = 1/(e^x) = 1/y, or in short: dx/dy = 1/y, or using the above notation, d(ln[x])/dx = 1/x, which can be used as an explanation for the requested question.
Technically you could also write out the Taylor series expansion for 1/x and integrate that, and show it is the same thing as the Taylor expansion of ln(x). Kind of circular reasoning, given you would already need to know that the derivative of ln(x) is 1/x, but still.
Is this actually rigorous? The Taylor series expansion for 1/x doesn't actually converge for the whole input space. In theory you could imagine a smooth function which is identical to 1/x over the radius of convergence (which I think is always less than 1?) but which is different outside, and you could obviously see that the integral of that function from 0 to x isn't equal to ln x, but the Taylor series would look the same. I know this is very similar to the Taylor series argument for e^it = cos(t) + i sin(t), but the Taylor series for both of those functions converge over the whole input space, so I can't tell if it works the same here.
The substitution feels a little bit like post hoc coincidence. Something you can do with any invertible function is differentiate the inverse relation, f(f^{-1}(x))=x. Substitution is just the chain rule interpreted on the antiderivative side anyway.
@@bprpcalculusbasics What are you talking about? Implicit differentiation applied to the inverse function relation is usually how you compute the derivative of an inverse function, such as arctan. The f'(f^{-1}(x)) part of the chain rule may or may not simplify nicely, but it's a far cry from being a method that only works for exp and log.
As I learned in school, ln(x) simply is the "inverse" of the e^x function, so that one can solve exponential equations by taking the logarithm... I did not know that ln(x) was *historically* interpreted as an integral, because I know that logarithms were known earlier than integration, weren't they? Even Kepler used log tables for his calculations (AFAIK), not knowing what "integrals" are. But that's not fully true, because Kepler used techniques which nowadays would be called "numeric integration", so he possibly had an idea where the journey was going... The beauty of the connection of 1/x with ln(x) is that one can immediately see the slope of the ln(x) function, literally being the reciprocal at point x, namely 1/x 😃
What he said in the video was not about the definition of a general logarithm (general logarithms were indeed known much earlier), but specifically the definition of the natural logarithm. And that was indeed defined to be the area under the curve 1/x. (Or to be a bit more precise, the function which gave the area under the curve of 1/(1+x) was called the natural logarithm, ln(1+x); that was stated by Mercator in Philosophical Transactions of the Royal Society, Volume 3, Issue 38 in the year 1668.)
@@richardheiville937 If you mean the _natural_ logarithm, that's right. If you mean logarithms in general, that's wrong. (And actually, the word "function" wasn't used in the definition, the concept of a function wasn't very developed in that time.)
I don't think that was the extent or intent of the question. Why is it that for all cases except x^-1, all integrals of the form x^n dx have x^(n+1)/(n+1) + C as the answer? How does it follow that the area under the curve of 1/x follows ln(x)? Obviously part of the answer is that you would end up with x^0/0 + C if you were to integrate x^-1 with the common power rule. So the follow-up question here is whether ALL integrals of x^n can be expressed as an ln function by a common rule/primitive?
It still doesn't answer the "because it is". The "trick" of substituting e^t can be done with ANY function whose inverse is defined, thus creating infinite possibilities. There must be another demonstration.
@@chri-k I think everyone forgets e is just a constant. Technically the derivative of a^x is still a^x so if we let x = 10^t then the answer would be log base 10 of x. I have not proved this but im pretty sure exponential functions always have the same curve so it does not matter what the log is just that the function is a log. I think people just use e and ln because e is a common constant.
@@jamesarreola3921 this is not true. The derivative of a^x is a^x * ln(a) and that ln(a) interferes with things. You can still do this with some other a, but it's way more convenient to use e, because then you don't need to deal with random constant factors e is not just a constant, it is a *very* special constant. The derivative of a^x cannot be defined without using e at some point.
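That ln(a) factor is easy to see numerically; a short sketch (function names are mine) comparing a central-difference derivative of a^x against a^x * ln(a):

```python
import math

def central_diff(f, x, eps=1e-6):
    # Central finite-difference approximation of f'(x).
    return (f(x + eps) - f(x - eps)) / (2 * eps)

a = 10.0
for x in [0.0, 1.0, 2.0]:
    # Numeric derivative of a^x versus the claimed a^x * ln(a).
    print(x, central_diff(lambda t: a**t, x), a**x * math.log(a))
```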
The correct sentence is "Why is an antiderivative of the function x->1/x, defined on the set of strictly positive reals, the function ln?". What is your definition for x->ln(x)? The usual definition is ln(x) = integral of 1/t from t=1 to x (x>0). An integral is a number, not a function.
In that graph, where is t? It's an x, y graph, right? Why is x = e^t? Where did that come from? Just because ln x = t means e^t = x seems circular. Could you not use any base, such as 20^t = x or 15^t = x?
In all the years of watching this channel, I feel like this is the first time he didn't answer the question. The first part, the "let x = e^t", seems like cheating, and the second part, the definition, also seems like cheating. By cheating I mean it's just reverse engineering the answer. I'm not even sure what I was expecting, but I think I would be more satisfied if he said it is a huge coincidence than proving something using the answer, which, BTW, is exactly what the question said not to do.
In order to do this more rigorously, you need more than a few minutes. E. g. look up the video "Why don't they teach simple visual logarithms (and hyperbolic trig)?" by Mathologer.
The question asks you not to start the proof by stating d/dx lnx = 1/x, as that's circular reasoning. It said nothing about starting from d/dx e^x = e^x, which is usually what you start with to prove the log derivative.
Given: integral sqrt(x) dx from -5 to -1. I'm assuming that -1 is the upper limit.
Rewrite as a power: sqrt(x) = x^(1/2). Thus, the integrand is x^(1/2). Use the power rule to boost the exponent by 1, and have the reciprocal of the new exponent become the coefficient:
2/3*x^(3/2) + C
The problem is, the given limits have no real output. We therefore will have complex outputs when evaluating the result.
(-1)^(3/2) = (sqrt(1)*i)^3 = i^3 = -i
(-5)^(3/2) = (sqrt(5)*i)^3 = 5*sqrt(5)*i^3 = -5*sqrt(5)*i
Subtract, and keep the 2/3 coefficient, to find the result of the definite integral:
2/3*i*(5*sqrt(5) - 1), approx 6.787*i
Note that it was arbitrary which square root we took at each limit of integration, so an equally valid solution is 2/3*i*(1 - 5*sqrt(5)).
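The result can be double-checked with Python's complex arithmetic, which uses the principal branch for non-integer powers of negative numbers; a quick sketch:

```python
# Evaluate the antiderivative F(x) = (2/3) * x^(3/2) at the complex limits.
# Python's ** on complex operands uses the principal branch of the logarithm.
def F(x):
    return (2 / 3) * complex(x) ** 1.5

result = F(-1) - F(-5)
expected = (2 / 3) * 1j * (5 * 5**0.5 - 1)
print(result)   # roughly 6.787i
```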
What if: imagine a world back in time when calculus is known, but people do not yet know about Euler's number and the natural log. Will the integral of 1/x be accepted as "undefined" in such a world?
This is a bit of an attack on semantics, but I believe "undefined" is only really used when there is no single answer to a problem. When there's a (potentially infinite) number of solutions to a problem where all are equally valid, there's no way to define one singular solution or answer, and as such it's undefined. I believe, anyways.
Does the area under the curve of y=1/x, x>0 between x=1 and x=t converge as t approaches 0 (t starting close to 1 and decreasing towards 0)? Or is this divergent?
Is it possible to take it in reverse and differentiate ln(x)?
Call f(x) = ln(x), g(x) = e^f(x) = x -> g'(x) = 1
But g'(x) is also equal to f'(x) * e^f(x) = f'(x)*x -> f'(x) = 1/x
Err, yes, that's how one usually gets the derivative of ln(x). But the person asking the question wanted to know how to get the integral of 1/x _without_ first knowing the derivative of ln(x).
@@spicca4601You can show how it fills in the gap of the power rule of integration, by starting with 1/(n + 1) * x^(n + 1) + C. Let n equal h - 1, and let C = -1/h. Take the limit as h goes to zero, and you'll see that this produces ln(x), when x is positive.
Another way you could work out integral 1/x dx: since this is a power function being integrated, why not use the power rule? Let's try, but indirectly.
Let h be a slight offset from -1, such that we integrate x^(h - 1) dx. Use the power rule to integrate, and use K as the arbitrary constant:
1/(h - 1 + 1) * x^(h - 1 + 1) + K
Simplify:
1/h * x^h + K
Let +K equal -1/h, which will have the cancelling property we desire:
1/h * x^h - 1/h
Group top over bottom:
(x^h - 1)/h
Take the limit as h goes to zero, using L'Hopital's rule:
d/dh [x^h - 1] = d/dh [e^(ln(x)*h) - 1] = ln(x)*e^(ln(x)*h)
d/dh h = 1
The bottom derivative relative to h is just a constant, so it's complete. Evaluate the top derivative at h=0, assuming x isn't zero:
ln(x)*e^(ln(x)*0) = ln(x)*e^0
Result: ln(x)
And since our previous +K was arbitrary, add on an unrelated constant of +C:
ln(x) + C
You could probably do it using log inequalities to manufacture some kind of squeeze theorem. I'm not sure if those inequalities require the rates of growth of x and ln x in the first place, which is basically just L'Hopital's, but oh well.
You could do it by setting t = -ln(x) and solving the limit of -t e^-t = -t/e^t as t goes to +infinity. And you can get that limit by considering the power series for e^t.
A simpler way: from the definition of ln as the inverse of exp, we have
x = exp(ln(x))
Differentiate both sides:
1 = (ln x)’exp(ln(x))
Therefore (ln x)’ = 1/exp(ln(x)) = 1/x
Didn't you watch the video?!? It was stated explicitly right at the beginning that the person asking the question wanted to have an answer which does _not_ use the derivative of ln(x) to get the integral of 1/x.
You can approach it from the other direction and prove the derivative of ln(x) is 1/x. You have an older video on that where you use both the definition of the derivative and implicit differentiation of the exponential function to get the result. By proving it you aren't just saying one is the derivative of the other, which was the concern of the OP. I'm not sure of your history argument, though. That definition is trivial once you understand the relationship from first calculus principles. And I'm pretty sure logarithms were invented well before calculus. When people started playing with logarithms, they weren't studying areas under curves. They noticed a relationship between a number and its index in a table, and that if you added the indices of two numbers and did a reverse table lookup, the resulting number was the product of the two original numbers. That is, logarithms transform multiplication into addition. That's the original history of the logarithm definition as far as I know.
"You can approach it from the other direction and prove the derivative of ln(x) is 1/x." ??? It was stated directly at the beginning of the video that that approach was already known, and the person asking the question wanted to know how one could get that integral _without_ knowing the derivative! "That's the original history of the logarithm definition as far as I know." What he said in the video was not about the definition of a _general_ logarithm, but specifically the definition of the _natural_ logarithm. And that was indeed defined to be the area under the curve 1/x. (Or to be a bit more precise, the function which gave the area under the curve of 1/(1+x) was called the natural logarithm, ln(1+x); that was stated by Mercator in Philosophical Transactions of the Royal Society, Volume 3, Issue 38 in the year 1668.)
The person asking doesn't understand the definitions of derivative (slope of tangent at x) and integral (area under the curve). Their intro calculus teacher has failed them.
hey bprp, you know how in respect to this video, the integral of (lnx)/x dx is ln^2(x)/2 + c? well I want to dare you to find the indefinite integral of x/(lnx) dx (I’ll give you a hint, it’s in the complex world after I checked it on wolframalpha, somehow came up with this at 2 am lol)
Given: integral x/ln(x) dx
Let u = ln(x), thus du = 1/x dx. Solve for dx: dx = x*du.
Rewrite in the u-world as much as we can:
integral x/u * x du = integral x^2/u du
Rewrite x^2 in the u-world: x^2 = e^(2*u). Thus, the integral becomes:
integral e^(2*u)/u du
Multiply by 1 in a fancy way, by producing a 2 out in front and a 2 downstairs with the u:
2*integral e^(2*u)/(2*u) du
Let w = 2*u. Thus, dw = 2*du; solve for du = dw/2. Thus, the integral becomes:
integral e^w/w dw
This has no elementary antiderivative, but we define the function Ei(x) to be the antiderivative of e^x/x. Thus, the solution so far is:
Ei(w) + C
Substitute w = 2*ln(x), and get the result:
Ei(2*ln(x)) + C
Ei = exponential integral.
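The substitution chain can be sanity-checked on a definite integral: with w = 2*ln(x), the integral of x/ln(x) from 2 to 3 should equal the integral of e^w/w from 2*ln(2) to 2*ln(3). A Simpson's-rule sketch (the helper is mine):

```python
import math

def simpson(f, a, b, n=1000):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

left = simpson(lambda x: x / math.log(x), 2.0, 3.0)
right = simpson(lambda w: math.exp(w) / w, 2 * math.log(2), 2 * math.log(3))
print(left, right)   # the two values agree
```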
Because then you prompt the question of how you reconcile that with the previous ways you were introduced to the definitions of base e and logarithms in general. Even though historically the integral of 1/x was discovered first, it rarely is taught in that sequence today. Usually logs are introduced as inverse exponentials, and e is introduced as a limit, and as the base of the special exponential that is its own derivative. For this reason, the onus of proof is to show how these ideas are consistent with the integral of 1/x.
@@carultch You just used a ton of words to say "we made up a symbol" as the answer. NO! That is not an answer. What is e. What is ln. How do they match? SHOW HOW.
@@carultch not the question. It's not "what's the definition", it is HOW DOES IT WORK. If I make up a symbol, say it is the answer to some hard question, I don't get a Nobel Prize for that.
@@hrayz Welcome to the world of transcendental numbers and functions. It isn't possible to evaluate these in a finite number of steps using just whole numbers, arithmetic, powers, and roots. The names of the functions and constants are just stand-ins for the methods computers actually use to calculate them. You can build algorithms for these functions using just whole numbers, arithmetic, powers, and roots, but it requires an infinite number of steps: examples being infinite series and continued fractions. You can look up the details for the infinite series that calculate logs, exponentials, and the number e, if you care to do so. Computers only evaluate a limited extent of these series, good enough for the precision needed for practical purposes.
not possible. If we want to find solutions to 5 variables, we need at least 5 equations. The variables a to d are coefficients of x^4 down to x respectively and e is a constant
Check out how we define e^x and ln(x) being its inverse: ua-cam.com/video/oBlHiX6vrQY/v-deo.htmlsi=QNPwD1VureMODjO5
The reason for the historical "log first" version of things was that logarithms made calculations vastly easier by turning multiplication and division into addition and subtraction. We really, really cared about logs a long time before we cared about exponentials because they're incredibly useful in a pre-electronic industrial society. Unfortunately most math teachers don't do a good job of explaining what they are or why they're useful and so a lot of students struggle with them these days.
Thanks for the insight! I'm interested to know what made logs so useful in a pre-electronic industrial society?
@@denoww9261no calculators. People would literally carry around books of logarithm tables. They also used slide rules. I'm old enough that I recall seeing log tables published in some of my grandfather's math books and playing with his slide rule.
@@denoww9261exact logs are more cumbersome to calculate, at least for anything besides carefully preselected numbers, than multiplication or division operations. It's a lot like being asked to calculate exact nth roots of anything but perfect nth powers (so square roots of anything but square numbers, cube roots of anything but cubic numbers, etc.). That can get very difficult the higher n goes. So calculators or log tables (or maybe slide rules) are needed.
And if what you're asking is instead, "No, I mean why are logarithms useful at all? Why is knowing them helpful?", the answer is that they turn up in all kinds of problems in physics, from navigation, to growth and decay patterns (biology, radiation, etc.), and on and on, and also in pure mathematics.
@@Apostate1970 Oh yeah, no calculators would make logs difficult. They used to have those tables for other functions as well, like the trig functions, right?
I'm curious about their usefulness back then as opposed to exponentials - I thought exponentials would go hand-in-hand with logs, but you mentioned in your original comment that logs predated exponentials in public consciousness by a long time. Logs and exponentials both show up in various problems in mathematics, were we just dealing with the more log-centric ones more back then?
on mobile website so i cant reply to replies but multiplication is very long, but addition is comparatively quick, so when you have a log table, you can add the logs and then find the sum to translate back to the product
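The log-table workflow can be mimicked in a few lines; rounding to four decimal places stands in for the table's limited precision (the helper names are hypothetical):

```python
import math

def table_log(x):
    # Stand-in for looking up log10(x) in a four-figure log table.
    return round(math.log10(x), 4)

def multiply_via_logs(a, b):
    # Add the logs, then take the antilog to recover the product.
    return 10 ** (table_log(a) + table_log(b))

print(multiply_via_logs(314.2, 27.18))
print(314.2 * 27.18)   # for comparison
```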
In the case when x < 0, you can have a different +C
What do you mean by a different + C? Are you just saying there's no obligation to have both sides added with the same value of constant?
@@xinpingdonohoe3978 Yes. In fact, the usual answer of ln|x|+c that we use is not actually the full answer. Since the domain of 1/x is two disjoint intervals we can have a different constant on each side. Thus, the full answer is
int 1/x dx = { ln(x) + c1 if x > 0
             { ln(-x) + c2 if x < 0
@@Ninja20704 that much is obvious, I was just a bit confused by the vague wording.
@@Ninja20704ln|x|+C is the full answer, with a caveat that it may only apply for either (0, +\infty) or (-\infty, 0), but not both. Indefinite integral is a shorthand for the solution of a differential equation of the sort y' = f(x), and a solution to a differential equation is necessarily only defined in an interval. Therefore, the solution in the positives and the solution in the negatives are entirely separate function sets that happen to have the same formulaic representation if the modulus is used.
@vladislav_sidorenko or you can just say that C is a locally constant function
This is a great derivation, but I think the question is secretly asking "why is the integral of x^n = x^(n+1)/(n+1) + c, except for exactly one case, when n = -1?"
I've definitely had that question before and really wanted to develop an intuition as to why the power rule "approaches" the natural log function. There's a really good answer on Quora or maybe Stack Exchange that does a really good job showing the intuition behind that which other derivations lack.
Indeed, check the SE «Demystify integration of ∫1/x.dx»
I tried pasting the url but it got deleted.
And in the complex realm, it's the heart of the matter for some fundamental theorems like Cauchy's integral formula, residues and the argument principle.
Yeah, the limit of n->-1 of x^(n+1)/(n+1) goes to +/- infinity on either side, and is therefore undefined in the middle.
@@NattiNekoMaid You can reconcile that one, by taking advantage of the constant of integration.
Let C = -1/(n + 1)
It's a legal move to do this, because C can be any constant that doesn't depend on x. Since n will be a constant in the original expression we're trying to integrate, it's a constant in the x-world which we need.
Now the limit becomes:
limit as n to -1 of x^(n + 1)/(n + 1) - 1/(n + 1)
Combine numerators, since we have a common denominator:
[x^(n + 1) - 1]/(n + 1)
You could use first principles to do this, but I'll use L'H's rule, since this is an indeterminate form of zero over zero. Take the derivative relative to n for top and bottom. Treat x as a constant when we're taking n-derivatives.
d/dn [x^(n + 1) - 1] = d/dn [e^(ln(x)*(n + 1)) - 1] = ln(x)*e^(ln(x)*(n + 1)) = ln(x)*x^(n + 1)
d/dn [n + 1] = 1
L'H's rule result, prior to plugging in n:
ln(x)*x^(n + 1)
Given x not equal to zero, x^0 = 1. Thus, when n=-1, this reduces to ln(x). Exactly as we were expecting.
Maybe because, for the x^n formula which you gave to exist in the first place, we have to assume n != -1, otherwise there's division by 0.
I've learned about logs and exponentials and derivations too long ago and don't remember if I ever questioned that relationship. Maybe I never saw a demonstration of it before. I love when we have clean, simple demonstrations like that one. When dealing with this particular question (demonstrate that a=b), it makes sense to think exponential when dealing with derivation (here integration) and also when dealing with logs, so a question about integrals and logs rings the exponential bell. Otherwise, if the question was more general (calculate a), this particular variable change might seem non-intuitive...
It is interesting how x^t has one clear rule to integrate, but then at t=-1 it is entirely different. What happens when t approaches -1? The integral function should approach ln(x), possibly with some constant separating them.
It is. Let t = -1 + h, to approach the problem point of t = -1 for very small values of h. After applying the power rule, you'll get:
1/h * x^h + C
All we need for C is to make it independent of x. It's OK for it to depend on h. As it turns out, letting C = -1/h will produce an indeterminate form, which we can work with using L'H rule, to show that the entire limit approaches ln(x), reconciling the problem point with the general power rule.
@@carultch I don't think you should have the arbitrary constant depend on the exponent. But if you define F(x) = int(u^(-1+h), u=1..x), then this can be evaluated using the power rule and will approach ln(x) as h goes to 0 as you stated.
Indeed, check the SE «Demystify integration of ∫1/x.dx»
I tried pasting the url but it got deleted.
@@drslyone That works as well. It turns out that you and I will both get the same result.
It is a valid move to make the arbitrary constant a function of the exponent. All that matters is that it is a constant in the x-world.
You'll find that 1/h * x^h - 1/h will have a point of tangency at x=1 with ln(x), and will be the definite integral starting on a lower bound of x=1.
@@drslyone Taking an indefinite integral and setting the arbitrary constant to -1/h is equivalent to taking the definite integral and setting the initial bound to 1.
Mathologer's visual logs video is really informative on this topic.
I've had the same question for a while now. Thank you for explaining it so thoroughly.
That's a really good question, and one where the answer didn't really satisfy me. I believe the reasoning behind the question is that Integral (x^n) dx = [x^(n+1)]/(n+1) + c for all values of n... except -1 (for some reason). Now it's easy to see that something has to break when n+1 = 0, i.e. when n = -1, but why does the gap specifically have to be filled by a log function?
There is a way to understand why natural log fills in the gap for the general integration power rule. Let n = -1 + h, and let the constant of integration c be -1/h. What we're doing, is setting the original exponent arbitrarily close to the problem point of -1. Since the constant is arbitrary, all that matters is that it doesn't depend on x, so we set it to a strategic value of h that will allow the limit to be reconciled. This gives us:
1/(-1 + h + 1)*x^(-1 + h + 1) - 1/h
Clean this up:
1/h*x^h - 1/h
Take the limit as h goes to zero. We get an indeterminate form, so set this up for L'H rule.
(x^h - 1)/h
dN/dh = d/dh [e^(ln(x)*h) - 1] = ln(x)*e^(ln(x)*h) = ln(x)*x^h
dD/dh = 1
Construct result:
ln(x)*x^h
Since x^h can now be evaluated directly at h=0, as long as x is non-zero, this leaves us with ln(x). Exactly as we were expecting.
A more direct way might be to show that adding two integrals of 1/x together can be represented as a single integral, like ln a + ln b = ln (ab). This can be done geometrically exploiting that 1/x = c/(cx). That doesn't prove it's the *natural* logarithm ofc.
I had a similar thought. Putting a finer point on how you said it, if you find the area under the curve from 1 to a^n, it will be n times the area under the curve from 1 to a, which makes it a logarithmic function.
That's essentially how it was actually done historically by Gregoire de Saint-Vincent and Alphonse Antonio de Sarasa.
Let's also extend it to all x ≠ 0
For x ≠ 0,
Let |x| = e^t, then:
|x|/x dx = e^t dt
e^t / x dx = e^t dt
dx / x = dt (e^t ≠ 0)
∫ (1/x) dx
= ∫ dt
= t + C
= ln |x| + C
Note:
d/dx(|x|)
= d/dx(x) if x > 0; d/dx(-x) if x < 0
= 1 if x > 0; -1 if x < 0
= x/x if x > 0; - x/x if x < 0
= |x|/x
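A small numerical check of that |x| derivative note, in Python (the function names are mine), comparing a central-difference derivative of ln|x| with 1/x on both sides of zero:

```python
import math

def lnabs(x):
    return math.log(abs(x))

def numderiv(f, x, h=1e-6):
    # central difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

for x in [-3.0, -0.5, 2.0]:
    print(x, numderiv(lnabs, x), 1 / x)  # the last two columns match
```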
This is also strange. At first glance the absolute value looks like some gimmick just added as one cannot have negative values.
@@okaro6595 It's a clever way to express 1 for x > 0 and -1 for x < 0.
I have wondered this for SO long! I was always disappointed with the Fundamental Theorem of Calculus as a proof because it started with the assumption that d/dx lnx = 1/x. It's circular reasoning. That's why I'm so happy that you made this video. Thank you so much!
You can do it from the other end:
de^t/dt=e^t
e^t=x, t=lnx
dx/dlnx=x
dlnx/dx=1/x
it's not an assumption, it's a definition
@@honounome Doesn't matter. If so, you should first prove e^(ln x) = x.
It’s not circular reasoning. d/dx ln(x) = 1/x can be derived with many methods like the limit def and implicit differentiation
@@user-dh8oi2mk4f You're using a differential to prove an integral. How is that not circular?
You can also do lim[n->-1] (int(x^n dx))
This leads to the identity: ln(x) = lim[n -> 0] ((x^n - 1)/n), easily proven with L’H
What is your definition of a^x if a is any positive real (especially if a is not rational)?
@@richardheiville937 What do you mean?
In my opinion, the better "why" is that ln(x) really, really, really wants to be zero. It's not, actually, the function x=0, but it tries so hard, that it fits into the whole weird sequence of derivatives and integrals of the polynomials where 0 might fit. It's also a function that is smaller (after a certain point), than ANY linear function you can name, no matter how tiny an 'a' you can put in front of y=ax. Zero is such a poorly behaved number that other things sometimes seem to spring up and take its place in various ways.
You can show that it is a degenerate case of the power rule, where the integration produces division by zero. By strategically assigning the arbitrary constant of integration as a function of the exponent, you can show that it is consistent with the power rule.
Here's how. Let h be an arbitrary infinitesimal offset of the exponent.
integral x^(-1 + h) dx = 1/h * x^h + C
Let C = -1/h. You'll see why.
(x^h - 1)/h
Take the limit as h approaches zero. This is an indeterminate form of (x^0 - 1)/0, which becomes 0/0, which means we can use L'H's rule.
dN/dh = d/dh [x^h - 1] = d/dh x^h
dD/dh = d/dh h = 1
Rewrite x^h as e^(ln(x)*h). Take its derivative in the h-world:
ln(x)*e^(ln(x)*h)
ln(x)*x^h
Thus, the limit becomes:
ln(x)*x^h / 1
As long as x isn't zero, x^h approaches x^0 = 1 as h goes to zero. Thus, the finished form of the limit is ln(x).
Is there a more intuitive route where we don’t already know to use e^t?
Yes. Essentially you show that the integral over 1/x from a to b is the same as the integral over 1/x from b times c to a times c, i. e. that the integral function F has the property F(b) - F(a) = F(b*c) - F(a*c). From that you can deduce that F has to be a logarithmic function. That's roughly how it was done historically by Gregoire de Saint-Vincent and Alphonse Antonio de Sarasa in 1647 to 1649.
And then you have to calculate the base of that logarithm. As far as I know, that took some further decades. Probably was first done by Roger Cotes in 1714, but surely again some decades later by Euler.
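The Saint-Vincent/de Sarasa property is easy to check numerically. Here's a Python sketch (midpoint rule; the helper name is hypothetical) showing that the area under 1/x from a to b equals the area from a*c to b*c:

```python
def area_under_reciprocal(a, b, n=100_000):
    # midpoint-rule estimate of the area under 1/x from a to b (0 < a < b)
    h = (b - a) / n
    return sum(h / (a + (k + 0.5) * h) for k in range(n))

a, b, c = 2.0, 5.0, 7.0
print(area_under_reciprocal(a, b))          # ≈ ln(b/a) ≈ 0.9163
print(area_under_reciprocal(a * c, b * c))  # same value: scaling by c changes nothing
```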
Question, why when we differentiate e^t with respect to x we get e^tdt instead of de^t?
You don't. That's just half of the Leibniz notation, and missing the other half.
When you differentiate e^t relative to x, you get (e^t) * dt/dx. You then have to determine whether A) t is a constant relative to x, in which case d/dx e^t = 0, or B) whether x depends on t, and how, in order to determine dt/dx.
Another proof: y = e^x; x = ln(y). Further information needed: dy/dx = e^x (very important!), and also that dx/dy = 1/(dy/dx) ... looks easy, but the proof takes a while. So dx/dy = 1/(e^x) = 1/y, or for short: dx/dy = 1/y. Using the notation above, d(ln x)/dx = 1/x, which can be used as an explanation for the requested question...
Very informative. Thank you for the cool history knowledge
Technically you could also write out the taylor series expansion for 1/x and integrate that and show it is the same thing as the taylor expansion of ln(x). Kind of circular reasoning given you would already need to know that the derivative of ln(x) is 1/x but still
Is this actually rigorous? The Taylor series expansion for 1/x doesn't actually converge for the whole input space. In theory you could imagine a smooth function which is identical to 1/x over the radius of convergence (which I think is always limited?) but which is different outside, and you could obviously see that the integral of that function isn't equal to ln x even though the Taylor series would look the same.
I know this is very similar to the Taylor series argument for e^(it) = cos(t) + i sin(t), but the Taylor series for both of those functions converge over the whole input space, so I can't tell if it works the same here.
Taylor expansion of x->1/x at x=0? seriously?
It was never said you'd take the Taylor expansion at x = 0... Just that you'd take the Taylor expansion. You could take it at x = 1 or 2 or anywhere else.
The substitution feels a little bit like post hoc coincidence. Something you can do with any invertible function is differentiate the inverse relation, f(f^{-1}(x))=x. Substitution is just the chain rule interpreted on the antiderivative side anyway.
It wouldn’t work with any other functions.
@@bprpcalculusbasics What are you talking about? Implicit differentiation applied to the inverse function relation is usually how you compute the derivative of an inverse function, such as arctan. The f'(f^{-1}(x)) part of the chain rule may or may not simplify nicely, but it's a far cry from being a method that only works for exp and log.
As I learned in school, ln(x) simply is the "inverse" of the e^x function, so that one can solve exponential equations by taking logarithms... I did not know that ln(x) was *historically* interpreted as an integral, because logarithms were known earlier than integration, weren't they? Even Kepler used log tables for his calculations (AFAIK), not knowing what "integrals" are. But that's not fully true, because Kepler used techniques which nowadays would be called "numeric integration", so he possibly had an idea where the journey was going...
The beauty of the connection of 1/x with ln(x) is that one can immediately see the slope of the ln(x) function: it is literally the reciprocal at point x, namely 1/x 😃
Good point. If (ln x)'= 1/x, then ln x is increasing but at a slower and slower rate.
What he said in the video was not about the definition of a general logarithm (general logarithms were indeed known much earlier), but specifically the definition of the natural logarithm. And that was indeed defined to be the area under the curve 1/x. (Or to be a bit more precise, the function which gave the area under the curve of 1/(1+x) was called the natural logarithm of ln(1+x); that was stated by Mercator in Philosophical Transactions of the Royal Society, Volume 3, Issue 38 in the year 1668.)
People defined first log using x->1/x function. It's easy if you have the concept of area.
@@richardheiville937 If you mean the _natural_ logarithm, that's right. If you mean logarithms in general, that's wrong.
(And actually, the word "function" wasn't used in the definition, the concept of a function wasn't very developed in that time.)
I remember just accepting it as how you maintain a variable past a 1 exponent
I don't think that was the extent or intent of the question. Why is it that for all cases except x^-1, all integrals of the form x^n dx have x^(n+1)/(n+1) + C as the answer? How does it follow that the area under the curve set by 1/x follows ln(x)? Obviously part of the answer is that you would end up with x^0/0 + C if you were to integrate x^-1 with the common primitive rule. So the followup question here is whether ALL integrals of x^n can be expressed via an ln function by a common rule/primitive?
It still doesn't answer the "because it is". The "trick" of replacing e^x can be done with ANY value whose inverse function is defined, thus, creating infinite possibilities. There must be another demonstration.
?
it has to be e^t. otherwise, the x and dx don't cancel after u-subbing.
(Well, except in the case where x=0. But 1/x where x = 0 is illegal anyway.)
It cannot. This works because e^t is its own derivative, not because it has an inverse
@@chri-k I think everyone forgets e is just a constant. Technically the derivative of a^x is still a^x, so if we let x = 10^t then the answer would be log base 10 of x. I have not proved this, but I'm pretty sure exponential functions always have the same curve, so it does not matter what the base of the log is, just that the function is a log. I think people just use e and ln because e is a common constant.
@@jamesarreola3921 this is not true.
The derivative of a^x is a^x * ln(a) and that ln(a) interferes with things.
You can still do this with some other a, but it's way more convenient to use e, because then you don't need to deal with random constant factors
e is not just a constant, it is a *very* special constant. The derivative of a^x cannot be defined without using e at some point.
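That ln(a) factor is easy to see numerically. Here's a Python sketch (central difference; helper names are mine) comparing the derivative of 10^x against 10^x * ln(10):

```python
import math

def numderiv(f, x, h=1e-6):
    # central difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

a, x = 10.0, 1.3
approx = numderiv(lambda t: a**t, x)
exact = a**x * math.log(a)   # a^x * ln(a), not a^x itself
print(approx, exact)
```

The measured slope matches a^x * ln(a), and is visibly bigger than a^x alone.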
The correct sentence is "Why is an antiderivative of the function x->1/x, defined on the set of strictly positive reals, the function ln?". What is your definition for x->ln(x)? The usual definition is ln(x) = integral of 1/t from t=1 to x (x>0). An integral is a number, not a function.
Common mistake among many students (including some of mine!) is to use the wrong word.
Correct word here would be "differentiate," not "derive."
In that graph, where is t? It's an x, y graph, right? Why is x = e^t? Where did that come from? Just because ln x = t means e^t = x? Seems circular. Could you not use any base, such as 20^t = x or 15^t = x?
In all the years of watching this channel, I feel like this is the first time he didn't answer the question. The first part the "let x = e^t" seems like cheating, the second part / the definition also seems like cheating. By cheating I mean it's just like reverse engineering the answer. I'm not even sure what I was expecting, but I think I would be more satisfied if he said it is a huge coincidence, than proving something using the answer, which, BTW, it is exactly what the question said not to do.
In order to do this more rigorously, you need more than a few minutes. E. g. look up the video "Why don't they teach simple visual logarithms (and hyperbolic trig)?" by Mathologer.
The question asks you not to start the proof by stating d/dx lnx = 1/x, as that's circular reasoning. It said nothing about starting from d/dx e^x = e^x, which is usually what you start with to prove the log derivative.
How do I integrate √(x).
with limits from -5 to -1.
Given:
integral sqrt(x) dx from -5 to -1. I'm assuming that -1 is the upper limit
Rewrite as a power:
sqrt(x) = x^(1/2)
Thus, the integrand is x^(1/2). Use the power rule to boost the exponent by 1, and have the reciprocal of the exponent become the coefficient:
2/3*x^(3/2) + C
The problem is, the given limits have no real output. We therefore will have complex outputs, when evaluating the result.
(-1)^(3/2) = (sqrt(1)*i)^3 = i^3 = -i
(-5)^(3/2) = (sqrt(5)*i)^3 = 5*sqrt(5)*i^3 = -5*sqrt(5)*i
Subtract, and apply the 2/3 coefficient, to find the result of the definite integral:
(2/3)*i*(5*sqrt(5) - 1)
approx 6.787*i
Note that it was arbitrary which sign we pick when square rooting each limit of integration, so an equally valid solution is also (2/3)*i*(1 - 5*sqrt(5)).
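Those branch choices can be sanity-checked in Python with `cmath` (negative reals raised to fractional powers come back complex, on the principal branch). Note the 2/3 coefficient from the power rule has to be included to land on the ≈ 6.787i figure:

```python
import cmath

def F(x):
    # principal-branch antiderivative (2/3) * x^(3/2)
    return (2 / 3) * x**1.5

result = F(-1) - F(-5)
print(result)  # ≈ 6.787j (any tiny real part is floating-point noise)

# cross-check: midpoint-rule integral of cmath.sqrt along the real axis
n = 100_000
h = 4 / n
quad = sum(cmath.sqrt(-5 + (k + 0.5) * h) * h for k in range(n))
print(quad)
```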
To take the derivative is to _DIFFERENTIATE._ Not derive, not derivate, not derivitivate.
What if: imagine a world back in time when calculus is known but people do not yet know about Euler's number and the natural log.
Would the integral of 1/x be accepted as "undefined" in such a world?
why? No, the integral is just the area under 1/x, there's nothing more to the definition
They might say it doesn't have a closed-form antiderivative, much like they say about e^(2x^2).
I think "undetermined" is the correct term.
@@thefance4708 true, excuse my ignorance
This is a bit of an attack on semantics, but I believe "undefined" is only really used when there is no single answer to a problem. When there's a (potentially infinite) number of solutions where all are equally valid, there's no way to define one singular solution or answer, and as such it's undefined. I believe, anyways.
Does the area under the curve of y=1/x, x>0 between x=1 and x=t converge as t approaches 0 (t starting close to 1 and decreasing towards 0)? Or is this divergent?
The area under the curve is exactly the logarithm, and ln0 diverges
Solve the system in natural numbers x+y=uv, u+v=xy
you can simply determine the derivative of ln x from the limit definition and it will be 1/x. Therefore, the integral of 1/x is ln x + c, no?
Can you solve the integral x^dx - 1
is it possible to take it in reverse and differentiate ln(x)? Call f(x) = ln(x), g(x) = e^f(x) = x -> g'(x) = 1, but g'(x) also equals f'(x) * e^f(x) = f'(x)*x -> f'(x) = 1/x
Err, yes, that's how one usually gets the derivative of ln(x). But the person asking the question wanted to know how to get the integral of 1/x _without_ first knowing the derivative of ln(x).
Can we do it by showing differentiate 1/x+c is lnx?
you make confusion between a derivative function and an antiderivative function.
@@richardheiville937 ohhhh 😅
@@spicca4601 You can show how it fills in the gap of the power rule of integration, by starting with 1/(n + 1) * x^(n + 1) + C. Let n equal h - 1, and let C = -1/h. Take the limit as h goes to zero, and you'll see that this produces ln(x), when x is positive.
How would you apply this strategy without knowing what the result will be?
How would you apply substitution when solving integrals? That's a pretty standard part of calculus, the intuition is that 1/e^t is easy to integrate
Another way you could work out what integral 1/x dx is:
Since this is a power function being integrated, why not use the power rule? Let's try, but indirectly. Let h be a slight offset from the problem exponent of -1, such that we integrate x^(h - 1) dx.
Use the power rule to integrate, and use K as the arbitrary constant:
1/(h - 1 + 1) * x^(h - 1 + 1) + K
Simplify:
1/h * x^h + K
Let +K equal -1/h, which will have the cancelling property we desire.
1/h * x^h - 1/h
Group bottom and top:
(x^h - 1)/h
Take the limit as h goes to zero, using L'H's rule:
d/dh [x^h - 1] = d/dh [e^(ln(x)*h) - 1] = ln(x)*e^(ln(x)*h)
d/dh h = 1
Bottom derivative relative to h is just a constant, so it's complete.
Evaluate the top derivative at h=0, assuming x isn't zero:
ln(x)*e^(ln(x)*0) =
ln(x)*e^(0)
Result:
ln(x)
And since our previous +K was arbitrary, add on an unrelated constant, of +C
ln(x) + C
doesnt that work proof with any base and could therefore give any log(x) as an answer?
No because the derivative of a^x is not a^x, unless a=e
Is there another method (except L’Hopitals rule) for solving the limit of xlnx as x goes to 0+?
You could probably do it using log inequalities to manufacture some kind of squeeze theorem. I'm not sure if those inequalities require the rates of growth of x and ln x in the first place, which is basically just L'Hopital, but oh well
You could do it by setting t = -ln(x) and solve the limit of -t e^-t = -t/e^t for t going to -infinity. And you can get that limit by considering the power series for e^t.
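A quick Python check (helper name is mine) that both forms of the limit head to 0 together:

```python
import math

def xlnx(x):
    return x * math.log(x)

for x in [1e-2, 1e-4, 1e-8, 1e-16]:
    t = -math.log(x)                      # the substitution t = -ln(x)
    print(x, xlnx(x), -t * math.exp(-t))  # the two forms agree and shrink to 0
```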
Can you prove this integral using the Riemann sum definition of the integral?
That was done historically by Gregoire de Saint-Vincent and Alphonse Antonio de Sarasa in 1647 to 1649.
can we make it with the definition?!
Can you please solve the integral from 0 to infinity of lnx/xsinx please ?????
Missing parenthesis.
A simpler way:
from definition of ln as inverse of exp we have
x = exp(ln(x))
differentiate both sides
1 = (ln x)’exp(ln(x))
therefore
(ln x)’ = 1/exp(ln(x)) = 1/x
Didn't you watch the video?!? It was stated explicitly right at the beginning that the person asking the question wanted to have an answer which does _not_ use the derivative of ln(x) to get the integral of 1/x.
You can approach it from the other direction and prove the derivative of ln(x) is 1/x. You have an older video on that where you use both the definition of the derivative and implicit differentiation of the exponential function to get the result. By proving it you aren't just saying one is the derivative of the other, which was the concern of the OP.
I'm not sure of your history argument, though. That definition is trivial once you understand the relationship from first calculus principles. And I'm pretty sure logarithms were invented well before calculus. When people started playing with logarithms, they weren't studying areas under curves. They noticed a relationship between a number and its index in a table, and that if you added the indices of two numbers and did a reverse table lookup, the resulting number was the product of the two original numbers. That is, logarithms transform multiplication into addition.
That's the original history of the logarithm definition as far as I know.
"You can approach it from the other direction and prove the derivative of ln(x) is 1/x."
??? It was stated directly at the beginning of the video that that approach was already known, and the person asking the question wanted to know how one could get that integral _without_ knowing the derivative!
"That's the original history of the logarithm definition as far as I know."
What he said in the video was not about the definition of a _general_ logarithm, but specifically the definition of the _natural_ logarithm. And that was indeed defined to be the area under the curve 1/x. (Or to be a bit more precise, the function which gave the area under the curve of 1/(1+x) was called the natural logarithm of ln(1+x); that was stated by Mercator in Philosophical Transactions of the Royal Society, Volume 3, Issue 38 in the year 1668.)
The person asking doesn't understand the definitions of derivative (slope of tangent at x) and integral (area under the curve). Their intro calculus teacher has failed them.
Interesting 🇮🇳 🇮🇳 🇮🇳 🇮🇳
hey bprp, you know how in respect to this video, the integral of (lnx)/x dx is ln^2(x)/2 + c?
well I want to dare you to find the indefinite integral of x/(lnx) dx
(I’ll give you a hint, it’s in the complex world after I checked it on wolframalpha, somehow came up with this at 2 am lol)
Given:
integral x/ln(x) dx
Let u = ln(x), thus du = 1/x dx. Solve for dx:
dx = x*du
Rewrite in the u-world as much as we can:
integral x/u * x du
integral x^2/u du
Rewrite x^2 in the u-world:
x^2 = e^(2*u)
Thus, the integral becomes:
integral e^(2*u)/u du
Multiply by 1 in a fancy way, by producing a 2 out in front, and a 2 downstairs with the u:
2*integral e^(2*u)/(2*u) du
Let w = 2*u. Thus, dw = 2*du. Solve for du = dw/2. Thus, the integral becomes:
integral e^w/w dw
This has no elementary antiderivative, but we define the function Ei(x) to be the integral of e^x/x dx. Thus, the solution so far is:
Ei(w) + C
Substitute w = 2*ln(x), and get the result:
Ei(2*ln(x)) + C
Ei = exponential integral.
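The substitution can be sanity-checked numerically without knowing anything about Ei, by comparing two midpoint-rule integrals (helper names are mine): the original x/ln(x) on [2, 3], and e^w/w on [2 ln 2, 2 ln 3]:

```python
import math

def midpoint(f, a, b, n=200_000):
    # midpoint-rule estimate of the integral of f from a to b
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) * h for k in range(n))

lhs = midpoint(lambda x: x / math.log(x), 2, 3)
rhs = midpoint(lambda w: math.exp(w) / w, 2 * math.log(2), 2 * math.log(3))
print(lhs, rhs)  # equal: both are Ei(2*ln(3)) - Ei(2*ln(2))
```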
Another explanation is that int x^(n-1) dx = x^n/n - 1/n + C, and as n approaches 0, x^n/n - 1/n + C approaches ln x + C.
But what about int x^(n-1) dx = x^n/n + 1/n +C. This goes to infinity + C as n goes to 0.
What you can do is F(x) = int(t^(n-1) dt, t = 1..x)
Hmmm...
I almost wish I was still in university so that I can use ln x as a definition. Might even get away with it using a citation.
I think it was because of their graphs.
It's because the derivative of ln(x) + C is 1/x. QED.
I miss the actual white board... 😢
i've tried with x=-cos(u) and got ln|-x|+c 😂 🤤
Int (1/x) dx => Int (x^-1) dx => (1/0) x^0 + c => (1/0) . 1 + c => universe implode + c => universe implode + the speed of light
Let c = -1/h, and take the limit of 1/h *x^h + c, as h approaches zero. You'll find it converges to ln(x).
nice
I write minus c instead of plus c so there is one less pen stroke 😆
Anybody else do the same?
Hes basically asking for the proof.
"We define ln(x) as..." is NOT AN ANSWER!!!!
WHY!?!?!?!?!?!
Because then you prompt the question of how you reconcile that with the ways you were previously introduced to the definitions of the base e, and of logarithms in general.
Even though historically the integral of 1/x was discovered first, it rarely is taught in that sequence today. Usually logs are introduced as inverse exponentials, and e is introduced as a limit and as the base of the special exponential that is its own derivative. For this reason, the onus is on us to show how these ideas are consistent with the integral of 1/x.
@@carultch You just used a ton of words to say "we made up a symbol" as the answer. NO! That is not an answer.
What is e? What is ln? How do they match? SHOW HOW.
@@hrayz I didn't make up any of those words. Both e and ln have standard definitions that you can look up yourself.
@@carultch Not the question. It's not "what's the definition", it is HOW DOES IT WORK.
If I make up a symbol, say it is the answer to some hard question, I don't get a Nobel Prize for that.
@@hrayz Welcome to the world of transcendental numbers and functions. It isn't possible to evaluate these in a finite number of steps using just whole numbers, arithmetic, powers, and roots. The names of the functions and constants are just stand-ins for the methods computers actually use to calculate them.
You can build algorithms for these functions using just whole numbers, arithmetic, powers, and roots, but it requires an infinite number of steps. Examples being infinite series and continued fractions. You can look up the details for the infinite series that calculate logs, exponentials, and the number e, if you care to do so. Computers only evaluate a limited extent of these series, good enough for the precision needed for practical purposes.
Integrate : 1/(1-x^20) dx🌚
WolframAlpha can do it for you
Hey, I challenge you to find the solution to the equation
ax^4+bx^3+cx^2+dx+e=0
not possible. If we want to solve for 5 unknowns, we need at least 5 equations. The variables a to d are the coefficients of x^4 down to x respectively, and e is a constant
Easy. X equals 0. E equals to zero also. A, B, C, D may be whatever you wish. You're welcome 😉
@@exhostosis 10000 iq my man
@@Brid727 I think they want to find the general solution to the equation with x in terms of a b c d and e
Basically the quartic formula
@@user-dh8oi2mk4f if the cubic formula is already absurd then I have no words for what the quartic formula may be