What’s missing, and I’m surprised Sabine didn’t pick up on this, is the relative complexity-increase or “notational compression” that is achieved between addition/subtraction versus multiplication/division versus exponentiation versus entire Taylor series (trigonometry). If those can be graded against one another then I would think that their ranking forms another log-log curve. After all, mathematicians wouldn’t have been compelled to devise a higher order operation if it didn’t provide at least an exponential advantage.
Yeah... I think the point Sabine is making is observational regarding what maths guys persue like what physics guys persue. Although it gives us impressive looking tracks and logs etc. which is nice nuu candy, its stiil a bit like what Pope Francis calls "Decadent Monasticism".
Yes , I have always wondered that if acceleration is the second derivative (or rate of change) of the objects position with respect to time , and acceleration is related to gravity then: What does the 3er derivative of position with respect to time (or the first derivative of acceleration with respect to time) will be able to tell us about gravity or else.
@lewebusl gravity is largely constant within the human scale. The trouble is transients like hitting a bump with your suspension while turning. Or in electrics with microphones and the resultant signals. It's taking an input and converting it to a desired output that sets the engineering mind ablaze. Outputs where the car doesn't spin out or the elimination of hiss of the mic by passing it through a noise filter processor. There is A LOT of math involved.
I find Stephen Wolfram's answer to this, regarding computational irreducibility, the Ruliad, and we being the kind of observer we are, to be quite compelling.
I've also gained loads from Wolfram's insights and have deep respect for the guy. Speaking math, however, we need to be as precise as we can. When Wolfram speaks about "computationally bounded observer" (like us), the expression in that context means actually "statistically bounded observer". Wolfram does very much wonder and even little speculate about what other kinds of observers could experience a slice of Ruliad in other computationally bounded ways. Traditionally that question has been approach by the scientific methods of the shamanic arts. I consider practicing geometric intuition, especially when it requires a bodily self-transformation such as shifting your perspectictive and attention e.g. to that of a flat lander, a shamanic art. The idea of Ruliad offers new language to discuss, contemplate and practice such geometric and computational shapeshifting. Obviously, adopting the perspective of a flat lander is already a major mold breaking in the Rulial space for a being used to 3D perspective. When trying to get a better grasp of relativity, Pure Mathematician Norman Wildberger approached to question by adopting the perspective of a 2D bat relating externally by echolocation. Norman told that for a good while he practiced echolocation by making clicks and trying to find his way around his home the bat way. .
About Power Laws on Physics : It is PROBABLY due to the Weierstrass Theorem that ANY continuous function can be numerically approximated at any arithmetical degree by a DETERMINED Polynomial function with integer powers (if perhaps with non integers powers as in Runge Theorem in Advanced Complex Analysis) ) . If you loose continuity of the Objects, but keep well defined stochastic properties , you can loose significantly Power behaviors and get instead Exponential and power behaviors mixed see : Modern Physics Letters BVol. 17, No. 13n14, pp. 733-741 (2003) Research PapersNo Access On the Statistics of an Ideal Gas with Mass Fluctuations and the Boltzman Ergodic Theorem: The Combined Boltzman-Tsallis Statistics Luiz C. L. Botelho
An interesting one is that energy is typically related to the square of something. And that doesn't seem to be arbitrary. For example, if energy were proportional to any other power of electic and magnetic fields, interference would not be compatible with energy conservation. And the energy being proportional to the square of velocity ensures that the energy is independent of direction. However I have long had the feeling that there's something deeper to the "squareness" of energy.
This is why the quantum relation E = h f , energy proportional to frequency, was challenging for the early theory of QM. If you are familiar with the Schrodinger equation, this E = h f is the reason that it has a first derivative in time, instead of 2nd derivative like a classical wave equation. Obtaining an equation with wave solutions but only a first time derivative is what forced the inclusion of the imaginary unit i in Schrodinger's equation.
When you have a process p that acts on a state x; if process p is at all a function of x (if p must use the value of x in any way), then it must lead to at least a "square" relationship.
@@kylebowles9820 Which can also include an equilateral triangular arrangement: 1 234 56789 10111213141516 The Pythagorean formula works equally well for square areas as for equi-triangular ones!
I‘m not really surprised by this result considering the fact that statements and proofs in definition-less logical systems are known to grow exponentially in length. The exponential distribution of terms is probably just a consequence of us combining concepts and values into definitions (i.e. giving them names, instead of writing them out in most basic terms over and over again) and probably holds for (almost) all mathematical texts, not just those dealing with physics.
you mean by logical calculus (application of the rules on first principles / axioms )? Could you explain in more detail? Do you have a good reference? Thanks!
I think that's an interesting insight about defining combinations of concepts into another concept leading to the exponential distribution in mathematics. For example, one can see that the need to use many addition's is saved by defining the concept of multiplication and the need to use many multiplications is saved by defining the concept of exponentiation. However, natural languages also define a word to designate a concept that consists of other words designating other concepts that combine to form the meaning of that concept. So, why does the distribution for natural language words usage differ from exponential?
@@luminiferous1960 I like your thinking & the question you pose. I cannot define what I feel, but something nudges me to think of it as an evolutionary survival strategy innate in humans. If natural languages were as laser-focused & precise as mathematical languages, would there be less diversity and a higher chance that events could do more damage to a population? Natural languages being subject to interpretation, people will differentiate and more scattering happens, reducing the chance of catastrophy decimating *all* of the now-separate groups... ?
@@phononify I read about it in the book “Type Theory and Formal Proof”. There is a question on stackexchange discussing the passage in the book which I cannot link here but you can find it by searching for "definition-less mathematics". The comments and answers there might guide you to original sources.
@@luminiferous1960 I think the reason is a combination of the facts that natural languages feature both redundancies and ambiguities and that they were created bottom-up by an evolutionary process instead of top-down guided by the idea to have parsimonious and consistent languages to discuss all of logic/mathematics.
Concerning the lack of third derivatives, I believe that the Ostrogradsky instability provides the beginning of an answer : in analytical mecanics, the equations of motion with third derivatives leads to non inferiorly bounded hamiltonian, which means instabilities because the system tends to be in the lower energy state (an energy of -infinity in this case).
Kind of morally similar to why gravity and EM follow an inverse square law: Othewise orbits would be unstable, and so we wouldn't be here asking about it
Engineers have a name for the third derivative with respect to time. In German they call it "Ruck", in English Wikipedia says it's called "jerk" or "jolt". They say that too much "jerk" can destroy certain kinds of machines and must be limited.
@@amigalemmingnot sure about machinery but imagine escalator starting with a jerk. Absolute force may be low but applied without giving passengers any chance to prepare and you have recipe for disaster.
Your "two time derivatives" comment at 5:00 brings to mind a comment in C. Hartley Rogers book "Theory of Recursive Functions and Effective Computability" - "Almost all statements which (I) have been extensively studied by mathematicians and (ii) are known to be arithmetically expressible can be seen, from a relatively superficial examination, to have quite low level in the [arithmetical hierarchy]. As has been occasionally remoarked, the human mind seems limited in its ability to understand and visualize beyond four or five alternations of quantifier." So Rogers is pointing to limitations in the ability of humans to conceptualize nature as the reason for some of the regularity.
@@Pure_Science_and_Technology It's not about "grasping". It is about what can be done with those laws. We stick to the things that can be computed for practical purposes.
Sounds more like a sort of p hacking to me. Constants is one category, but variables are split up (x gets its own). Square is one category, but so is exp (which also includes ³ etc I assume). There are also additional options to change the categories (e.g. make numbers to digits) --> Changing the categories a couple of times and you will find something which fits a neat curve.
"but so is exp (which also includes ³ etc I assume)" I'm neither a physicist nor a mathematician, but I certainly can't rewrite x^3 as something involving exp(x); not without canceling it out that is. log(exp(x))^3 works for real numbers. That said, "Constants is one category, but variables are split up" does look fishy.
@@gernottiefenbrunner172 the initial poster meant x^3 = exp(3 ln(x)), I think; (and in general x^a = exp(a ln(x)); this is actually a usual definition of x^a for irrational a's!) You don't really need powers if you allow exps and logs.
@@lism6 Wow, that's actually simple enough to make it embarrassing I couldn't do it, even though I didn't do much math after I failed at school. Especially since I do recognize this representation from the formula for the derivative of a^x
I assumed they'd be using exp as a general operator for ^; Independent of whether it's e^x, x^3 or 2^x; I now skimmed through the paper and it seems like they are using exp for e^sth, have defined ^2, ^3 and ^4 as special oeprators and all other sth^sth are considered pow. So I guess my assumption was wrong, but the reality is not less arbitrary :/ They are also excluding any functions which include integral or differential operators without giving any reason why... They also defined +, - and neg, I guess you could get rid of either neg or -.
Some people are freaked out by deep space. Some people are freaked out by clowns. I'm freaked out by the idea that our brains are formed in such a way - that some of our observations are just as likely to be a reflection of our own process patterns as they are to be objective truths. I think sometimes we're trying to study the hardware of the universe, but unknowingly study the software between our ears.
The only problem is to find out which is which... or if it doesn't matter. Like with the anthropic principle. Loved the example with a pond. Where pond states, that the earth beneath was made perfectly, just to fit it's shape 😊
Another way to put it is that in order to fit things in the limited storage between our ears, we create a lot of unintended compression artifacts, which we then fail to identify as such.
I would call this the measurement error or bias in the instrument that we use to determine all of this; i.e., the brain. It's like the statement, 'Any portrait is a self-portrait.' That doesn't change the fact that it is a portrait.
Easy explanation: Natural languages have low entropy while the language of mathematics has high entropy. Natural languages contain a lot of redundancy and are very tolerant of a wide variation in pronunciation and spelling. They could be compressed significantly without losing any information. In contrast, the language of mathematics is already highly compressed and almost not at all error-tolerant. A symbol added or changed gives a completely different meaning. This difference of course affect the frequencies of used symbols or tokens. It would be interesting doing the same study with the language of biology, DNA. As far as I know, this language also has a lot of redundancies, which makes DNA error-prone to a certain extent.
On the surface, this would look like an interesting deviation from a known pattern. However, given that they only used a very specific subset of equations, I wonder if this is a case of sample bias.
They aren't equations they are operations, equations are built using operations and variables. these are the most common building blocks of equations that are shown
@@NemisCassander lectures from a notable physicist, a list of equations named after people and a body of written equations? No they are very similar samples and none of them are direct measures of nature or natural systems. It's the answer you get to the question "How can I publish my paper without doing any fieldwork?"
If math is self-referential (it is), then everything can be derived from one or two numbers (relations), which is the definition of Zipf's law for distributions. Listen closely: everything grows exponentially from the speed of growth, OK? How is this finding any surprising then? STRUCTURE is what's more important (and I have Some)
I have a problem with the graph presented at 3:23: there are only 12 values. Any ranking will exhibit some kind of decrease, but this only becomes interesting if there is a huge dynamic range, like there is in the ranking of word order, or, indeed, ANY quantity that exhibits a huge range of values, like city populations, sizes of lakes, etc. Those rankings also obey Zipf's law.
So? Lots of systems that you can regard as language have much fewer "words" than English. Programming languages for instance. And those surely comply to Zipf's law.
@@perpetualrabbit The distinction I'm drawing is between lists that have a huge number of elements, like a spoken language, and the short list of 12 elements in the graph. That list does not have enough of a dynamic range to admit of a meaningful scaling law. By dynamic range, I mean the range of frequencies of occurrence of the elements. For English, this ranges from about 2.4% for "and" to at least as low as about 5 x 10^-8 for "zoroaster" - in other words, a range of about 10^7 or higher. The dynamic range for the symbols in the graph is only about 30. However, it must be said that there are a lot more symbols used in equations than the 12 in the graph. I'd like to see the whole set.
There have been several experiments where artificial intelligence (AI) systems were tasked with discovering the underlying laws of physical systems without any prior knowledge or predefined equations. The results were surprising: the AI identified entirely new variables and equations that differ significantly from traditional human approaches. This suggests that there may be multiple ways to understand and describe the laws of physics. It seems that the symmetry discussed in the video may stem from the fact that both the equations and the language we use are human constructs, shaped by subconscious patterns such as efficiency, beauty, and golden ratio.
There are generally infinitely many different ways to mathematically formulate the exact same theory. This is why we have so many interpretations of quantum mechanics. It is curious, if you think about it, that physicists do not have multiple interpretations of other theories. (Or to the extent that they do, they didn't spread very far.)
@@SabineHossenfelder It might be interesting to submit these reformulations, and perhaps a random set of other possible reformulations in order to remove the human element as much as possible, and then see if the exponential law still applies.
In the ability of calculating, humans are in the lowest third of all species! That there are people believing in Einsteins bullshit and trigonometry is just using tables, is a proof (logical=mathematical).
@ isn’t it embarrassing to ask for “Allgemeinwissen”? You could watch almost every documentary about animal-skills. Humans are in competition with ants if it comes to calculating things. We are just doing abstract nonsense like LLM
I don’t know if it’s fair to call this a power law of power laws, but I might be thinking about this the wrong way. But both Zipf’s law and this one are dependent on representation, as you mentioned. That means they’re more “laws” in the sense that they describe human behavior rather than natural laws. It feels a little bit like a conditionally convergent series in that way, where the same series arranged differently can be made to sum to anything, whereas a natural law feels like it should more elementary and essential, like an absolutely convergent series where changing the order of the terms doesn’t affect the sum. That’s admittedly a pretty hazy analogy, but still I think it’s apt.
We can see that the more familiar operations are more commonly used than the less common inverse operations: multiplication vs. division, addition vs. subtraction, square vs. square root. And squares are used to replace multiplication, anyway. This is likely to be due to human preference. The former are easier for us to use, so we often express things using these operations. Also, we tend to factorise to make expressions use these operations: A - B - C - D will likely be written as A - (B + C + D) for convenience, and A÷B÷C÷D --> A÷(B×C×D). I think that largely explains the relative prevalence of inverse operations. Regarding the different types of operations, it probably depends on how we think of the world. If we consider the world in terms of waves, we'd see a lot of sin, cos, and exponential functions. Otherwise, sin and cos mostly appear for angular relationships, and exponential functions would be largely from statistical mechanics. Also, how many addition operations are counted in an infinite sum? Or for the total contribution of a large number of particles (total force applied by a mole of atoms)? I'm guessing it's the handful that are written for us to get the idea.
I suspect the reason why we don't see third derivatives and higher is due to Fourier's Principle. Every smooth curve can be decomposed into a series of sine and cosine functions, and repeatedly differentiating these functions will get you back where you started, up to a scale factor. I also think that this could be used to simplify a lot of complicated mathematics used in physics. For instance; starting from Riemann's curvature tensor and working out it's derivatives repeatedly until you get the scale factor, then equating this scale factor to one, might show a useful constraint on the value of the cosmological constant. Tho I don't yet have the mathematical skill to work this out fully myself.
Fourier series are periodic, which limits their application (e.g. in General Relativity the solution will exist for all t > 0 and not be periodic in general). Regarding your idea of a scale factor, sure for f(x) = sin(3x) you could say the scale factor is the constant in f''(x) = -9 f(x), but suppose you have g(x) = sin(x) + sin(3x) , what is the scale factor now?
@@iyziejane Those are very good points. For the first, the big bang, and possibly other bounce events would be cyclic. For the second, the scale factor in general cases gets complicated. I think it might be something like a power series or Taylor series. This is why I admit not having the maths chops to work it out myself. I think tho that such complex finctions could be simplified in the way that Tayor series often simplify to easier to manage functions like exponentials and the trig funcions themselves.
@@eddie5484 Thanks for the reply, I'm a physics professor and I enjoy considering student / novice ideas. Fourier analysis occupies plays a large role in quantum mechanics. You may enjoy reading about a generalization of Fourier series called "Sturm-Liouville theory", which shows that 2nd order linear differential equations of a certain type give rise to families of functions that can represent any function in the sense of Fourier series, y'' = -y and the trig functions are one example, Legendre polynomials are another, and most special functions in physics come from this construction. I also suggest to read about chaos theory, with nonlinear differential equations, which is where the fourier technique strongly breaks down.
@@iyziejaneYou are not a physics professor otherwise you would know that GR is mathematical nonsense. The fact that you don't understand this proves mankind still hasn't evolved past the caveman/flat earth/stationary frame view of the universe. Dont you think it odd that after a century worth of experimentation, GR and SR have failed to be validated. Didn't Einstein himself say that the laws of physics are equally applicable in ALL frames of reference. How do you apply both GR and SR to the same frame. The fact that you can't get QM to align with GR, or GR to align with anything for that matter, should clue you in on the fact it's BS.
@@stewiesaidthat In fact I basically agree with you that GR introduces too much nonsense, that the word salad about dynamical spacetime holds back progress in fundamental physics, that the higher order nonlinear terms of GR have never been and probably will never be verified, that it clashes terribly with quantum mechanics, which was the real physics revolution of the 20th century. That the main attachment to GR is a sociological one, because it is central to Einstein's hero story, and for that one man's ego and the culture he represents, physics has to suffer for 100 years. Is that good enough? I'm probably in the 0.1% of legitimate theoretical physicists who agree with you this much. PS Einstein plagiarized everything all through his career, not just the well known examples of relativity.
Excellent phenomenological point 0:43 - that ideas are a form of "reality." There is "real-reality" (physics, chemistry, etc.), social reality (our cultures), personal reality (our beliefs), virtual realities and sensoriums, etc. There are natural and what one might call "artificial" sciences and associated technologies, both of which can be studied, replicated, theorized about, and evaluated using mathematical and statistical and even philosophical methods and tools. Examples of the latter might well include architecture, jurisprudence, history, maybe non-Euclidean geometries, things that might not be studied if not for human curiosity, interest, and imagination (e.g., aviation), etc. Science fiction has a habit of turning into real factors in our worlds.
The underlying law may simply be a product of the stability of the phenomenon being described. Stable things appear more prolific. Prolific things are more likely to be investigated and described. Described things appear more important, and those are the ones that appear on the Graph.
@@bartsanders1553That might be simply related to word length, as shorter words are likely to be used more frequently, as well as easy to learn and pronounce syllables like Maa and Paa.
@@picksalot1 It actually spreads out more than random selection. Single letter words ought to rank higher in general than multiple letter words, but it turns out to be not the case.
Last video i commented about the need for the scientific community to do an analysis of the process of science. A second effort i think we need is to build a cohesive encyclopedia of established knowledge. The question yesterday was "how can we refine the process of science to do better?" The question under this exercise is: what cross-disciplinary facts are we missing that point at the next layer underneath our broad understanding of reality?
Intriguing. Thinking of math as a language, couldn't it evolve into something that literally cannot expose or solve certain problems? I could see where alien math might be able to solve problems that human math can't and vice versa. There are some languages on earth where it is very difficult to express certain concepts, while in other languages those concepts are relatively easier to communicate.
Conceptually, math isn’t a language. There are mathematical concepts you can use for communication and even consider them languages, but that doesn’t make math itself a language the same way colors or resonances aren’t a language.
@@johnrichardson7629 IDK, I disagree that math isn't a language. For decades (centuries?) in Universities, Math was part of the faculty of arts and not science. It was considered a language. When I studied Engineering, it was well drilled into our heads that mathematics was the language that engineering is taught in so the complete immersion in mathematics (rather then engineering concepts) the first few years was required first, then engineering principals came after you could speak the language.
@wally7856 None of that makes it a language. We make mathematical statements that refer to mathematical objects and their properties and relations just like we make natural language statements to refer to things and their properties and relations. Surely you recall being taught the difference between numbers, ie the things referred to, vs numerals, ie the symbols we use to refer to them.
Some of the simplicity comes from the structure of spacetime, it has 3 dimensions on macroscopic scales, and it's precisely flat, except for subatomic and cosmological scales. This is reflected in the laws of physics, which then preserved through all the emerging higher layers of reality. The reason for why spacetime is like that is mostly explained by inflation and the anthropic principle. Btw. if you want proof for the inflation model, just check the prices at your local supermarket. The other reason is that we use approximations, linear mostly, which hides the ugly parts in most cases. And if this fails, we just through up our hands, and use numerical models instead. Those are the same simple approximations, just used for each small bit of the problem, then aggregated.
Many real-world relationships are approximately linear, in isolation, and we tend to look at things deconstructively rather than holistically to better understand them. By holding some things constant even when they aren't, we gain insights into the components of complex systems. Describing the whole system in a single equation isn't impossible, but it is very hard to fully capture complex dynamics in that way.
Thanks for that insight. I would add that when taking the first steps beyond linear approximations to understand the complex nonlinear relationships, scientists most often start with expanding a nonlinear relationship into a power series approximation so that the contributions from each higher order term can be determined. This is also a reductionist (or as you phrased it, deconstructive) rather than holistic approach to understanding complex nonlinear relations, but it has been very successful in modelling complex nonlinear dynamics in many scientific and engineering disciplines when the contributions from higher order terms diminishes rapidly with increasing order.
In defence of the decomposition approach, if you take the view that the most fundamental things are small, then it makes sense to start with the small and build up.
@luminiferous1960 yes reductionist was in fact the term that was evading my memory at the time. And I agree with the tendency to work around limitations of existing solutions to extend them, rather than looking for a more fundamental correction of early approximations. It's a huge problem in economics.
@audiodead7302 "Those who would destroy a thing in order to understand it have left the path of wisdom" The context is Gandalf admonishing Saruman who put on a new magical, opal-like vestment and rejected the color white, which once represented him. He revealed himself finally as Saruman of Many Colors. He wished to become a power like Sauron and become himself a ring-lord and ring maker.
Chemical engineering took a major advance when an Indian found many chemical behaviors follow cubic functions (s-curves), which I doubt are in WikiKeepItSimpleStupid.
Humans evolved in this universe through the interplay of its physical laws. Therefore it isn't surprising that we should create maths which also reflects those laws. If you sow beans then you'll get bean plants, not pineapple trees! I hope this makes sense... We arise as part of the universe and our activities produce results which are also part of, and so in accordance with, the universe.
From an orderly early universe to a more chaotic disorganised one. Entropy wins in the end. I think these things are a symptoms of decay. Nearly inevitable.
This doesn't make sense. What's the alternative? That's what you're trying to say. That there is an alternative and it is impossible. What is it? That we get transplanted here from another universe, and then... Then what, instead, exactly?
You may have that the wrong way around. The universe *as we understand it* reflects the mathematics we use to create it as a structured entity out of mere sensation. Read yer Kant....
Dear Sabine, Thank you for brining much needed critical insight along with unparalleled humor to subjects I love but have fallen away from due to absolute boredom or unintelligible noise of popular discourse and misleading news sources. I am an artist and developer. I love science (especially physics.) Without humor, I don't think the journey toward knowledge/discovery in any field is worth the candle. Thank you for bringing a brilliant level of humanity and artful discourse into my almost daily mourning routine.
Could you please make a video about measurement? When you talk about the problem of measurement in science, do you mean a quantum observation event, the physical instruments that gathers data, or combining data into a metric space?
Sabine , probably (I SUPPOSE!) the base of this regularity in Mathematical Physics is rooted in our GEOMETRICAL EUCLIDEAN idealizations of the Physical World (EVEN WITH RIEMANNIAN STRUCTURE AS SUPPOSED IN GENERAL RELATIVITY or usual Strings and quantum field theory ) .For instance , in DIRECT PROBLEMS Classical mechanics , point particle trajectories are "perfect" geometrical curves in the ambient space (smooth and differentiable) . And there is a theorem very deep in Differential Geometry called the Fundamental theorem of curves that say that is enough to characterize curves COMPLETELY by using at most its second derivative ON CERTAIN VECTORIAL CURVE'S OBJECTS (The normalized tangent vector -its "velocity". Its normal and binormal -its "vectorial acceleration" in R3 -THE SECOND NEWTON LAW SAYS THAT THE FORCE IS A VERY SPECIAL KIND OF VECTOR ; ITS IS ALWAYS IN THE OSCULLATING TRAJECTORY PLANE .NO COMPONENT ALONG THE PARTICLE'S TRAJECTORY BINORMAL!!).) ! . The same mathematical behavior of second derivatives objects for surfaces, three dimensional volumes , in General Euclidean Manifolds, etc.. . So it could be of no surprise or not natural that Newton-s Law for motion only uses second VECTORIAL derivative for determine particle trajectories .By the way , Kinetic energies only involve first vectorial derivatives . Kinetic Energy is always quadratic in first derivatives (The minimum principle lead us to second derivative for motion equations ). Higher order equations could occurs in Classical mechanics always as 'Dissipative" laws -Friction . Of course if you have "Fractal trajectories" like particles moving in a turbulent fluid which have only its position as a meaningful geometrical object (as a Newtonian Particle) and not a well defined point wise Newtonian velocity or acceleration as a non random motion , you must use now an object called Stochastic Ito derivative which has meaning only for the "first" derivative!!(It is a sort of a kind of ...integral!) . And about Higher derivatives in Mathematical Physics ? .They exists , but they are very! , very ill behaviored for usual Physics !. In Inverse Problems they proliferate . In Geophysics , the main problem on seismic prospecting are the famous Ill conditioned mathematical problems (lost of inversibility and continuity on the underlying Math ) .In my time as a Mathematical Geophysicist , I was studying Random Transforms for inverse seismic imaging problem which had the promisse to be less sensitive to mathematical ill possednes ! .Totally different framework from usual Theoretical Wave Physics ! . See the following (old!) venerable Book on ill posed problems - "Solutions of ill-posed problems" , Andrei Nikolaevich Tikhonov and V Arsenin . -1977 . & Feynman path-integral representation for scalar-wave propagation Luiz C. L. Botelho and Ricardo Vilhena Phys. Rev. E 49, R1003(R) - Published 1 February 1994
Great video, Prof Hossenfelder!...but I wonder if we might be making a bit of a leap by suggesting that this statistical pattern in physics equations is a "Law of Natural Laws"--as stated in your video title and "Meta-Law of Nature" in the original paper. The exponential decay pattern the authors found seems to reveal more about the structure of mathematical equations in physics rather than about nature itself. It’s fascinating and valuable for symbolic regression and other computational methods, for sure-but it’s also possible that these patterns result from human conventions and preferences in equation-building, rather than any fundamental aspect of nature. We naturally simplify equations with integers, familiar functions, and constants. So maybe this meta-law tells us more about the language we’ve created to describe nature than about the natural laws themselves. Perhap, "Scientists Discover a Pattern in Physics Equations-Could It Reveal a Meta-Law?"...might be a more accurate title?
this law about word frequency i accidentally discovered it while creating a chatbot from scratch in luau. I use the inverse sigmoid to determine word weight along with emotional tonality, and entire context weighting. Then I activate the result with log and it works amazingly
Non-uniform distribution of language elements is the basis for encryption/decryption and Huffman compression. The idea has been around for quite some time (centuries maybe?).
One of the most technically important areas of engineering involves varying fractional powers - adiabatic expansion and compression of real gases. It's a consequence of the different degrees of freedom of different molecules, along with different gas mixtures and dissociation.
That's funny. I remember studying physics and engineering dynamics, wondering if there were any processes that needed third or higher derivatives to describe said processses. Found myself so busy studying and working that I never found time to contemplate those sorts of things any further and forgot all about it.
Higher derivatives are used in mechanical engineering when dealing with reciprocating machines, cams, etc. The third time derivative of position is called "jerk".
Pure math is about patterns and relationships in the way you count, and account, for something (and even where nothing can be something). The question is whether those patterns and relationships map in a useful way to what we can see and hear and smell and touch and taste and in any way experience.
It is nice to see natural laws being described by "natural" maths. It is always weird when you read that someone proved the existence of 15 dimensions or something. I mean did you prove it, or did you have some crazy theory and you needed to invent 15 dimensions to make the math work?
The additional dimensions are needed for string theory (i.e. to align any kind of prediction from that theory with reality). But string theory is very speculative. So you can safely assume that there are no additional dimensions beyond those that we observe (i.e. 3 space, 1 time dimension).
Young lady, you seem to pick the most fascinating aspects of physics and the related technology and boil it down to a succinct and clear presentation. It's fascinating how you follow the links of these concepts out into seemingly unrelated fields like language.
For a relation that might reveal deeper physical truths, many of the categories should be amalgamated. In particular the *** and *÷* operators should combine, the powers and roots should combine, logs and exponents should combine. *X* should be amalgamated with the variables, not have its own category. All trigonometric operators should be combined. Initially I thought the same of *+* and - but provided equations are reasonably simplified (e.g. write - and not +- ), I no longer do. What I'm saying with this is that operations that are inverses should be amalgamated because at the lowest level they are the same type. Expressions can often be rewritten to use one or the other. Now I'm neither mathematician nor physicist so I could be slightly off in some details, but I am more interested in the origins of physical principles and their relation with mathematics than I am interested in human tendencies in this context.
4:55 Ostrogradsky Instability is a possible explanation, as having 3rd derivatives (or higher) of position with respect to time in a theory of classical mechanics causes a Hamiltonian that is unbounded from below.
For over a decade, I've been telling people about how Zipf's law explains many things, from matter distribution in the universe to economic inequalities. Nice to see it's finally being explored for real.
@@therealnotanerd_account2 Yes. They are almost identical. Zipf discovered the distribution by rank ordering word usage, and Paretto found it in, well, everything. The 80-20 principle is more application than observation, but follows the same idea.
I mean, for example so many force laws (& we have lots of them, so you would expect whole number exponents to be overrepresented) are inverse square because that is the nature of 3D space... to then read is as "ooh isn't it weird the exponents of things are almost always whole numbers" isn't telling us something we don't already know. Whole numbers exponents often tell us about the dimensionality of things. I'm willing to contemplate that there may be something deeper, but I would definitely caution about reading too much into it. Fractional exponents are typically fractal things, non-standard dimensionality, in some senses infinite... typically not corresponding to measurable quantities.
0:09 is weirdly similar to the tree of the Sefirot. It just needs to show the root ball for Malkhut. Maybe those old rabbis were mathematicians too. :)
The Ancients, across many thousands of years, understood the universe far better than we do. We quantify and qualify, rather than interpret and integrate (with our lives). We 'understand chemistry' but ignore that our bodies are a collective of electrochemkcal reactions, a coherence between billions of atoms acting in concert through DNA harmonics (2 Golden Ratio spirals linked by primordial hydrogen bonds). We are star children in every meaning of the term.
They were. Isn't it Malkuth? There is some wonderful numerology in the Kabala. The quality of zero, the quality of one. The quality of two and so on. Sort of predates topology and dimensional analysis.
I think the underlying structure is not only how we simplify, but also how we came up with notations in the first place and good the same expressions just get a simpler notation, if used often enough.
You probably already know this, but Ostrogradsky instability is potentially a partial explanation for why we usually don't have more than two time derivatives. Although, I think there are loopholes, so it's not a complete explanation.
The reason integers predominate is simple. If you look at a small sample of any object the count of objects will be an integer. Lets say count the number of rocks on a hillside and the number of visible rocks is 112, so the answer is 112 rocks. But this is very simplistic as each rock is different in weight, size, and location. It is the simplification of the classification process and the simplified description that delivers the integer 112. I am sure many of the rocks were chipped or eroded severely but nobody is going into detail in saying the number of rocks is 67.7 when all of this taken into account as many of the rocks might be only one half of the rock or only 3/4. Half of a grain of sand is still a grain of sand.
A transistor is a fundamental electronic component that even the edifice of AI today stands on. Not that 4 terminal components do not exist but all the critical ones are till 3.
there's nothing surprising about it. anyone who understands the following laws and relationships knows exactly what it's about: central limit theorem (normal distribution), Benford's law, Shannon's entropy, scale invariant power laws (which also lead to self-referentiality and fractals), Pareto's principle, and so on. fundamentally, this is a matter of probabilistic and statistical laws, which has a mathematical basis, but since mathematics is universal (which is self-evident), this regularity is also universal. that is, the distribution of the first digits of numbers, the frequency distribution of letters in the alphabet (entropy), processes described by Pareto's principle (e.g. the proportion of marketing department spending that is meaningfully used), the distribution of fish sizes in the oceans, wealth distribution in economically under-regulated societies, the frequency of asteroids by size in space, and so on. why would it be any different in the case of scientific texts? ps: as regards the two examples mentioned by Sabine, power laws in nature usually have integer exponent because our reality is (or at least it seems to be) three dimensional. e.g. the inverse square laws are due to this fact, since energy is spread out in a spherically symmetric way (at a given moment energy is distributed on a two dimensional spherical surface). of course, it doesn't mean that fractional dimensions and exponents cannot be more accurate, who knows? but e.g. 3/2 is also a pretty common exponent in physics (due to the combination of the three dimensional space and the Pythagorean theorem). and as regards the second example (why third temporal derivatives are not relevant): it has both natural and artificial reasons. the artificial reason is that we could use third order derivatives e.g. for position, but velocity (first order) and acceleration (second order) are the most important ones, so we usually neglect the third and higher orders. but e.g. "jerk" in kinematics is the third derivative of position, i.e. the acceleration of acceleration. in reality, almost every motion has non-zero third or higher order terms, we just ignore them. but why are we interested in only the first two? well, this is the natural reason: because change (velocity) and change of change (acceleration) are the most important terms. the former is always relative, meanwhile the latter is always absolute. self-referentialisation (latter) always leads to absoluteness. this is why acceleration has such a super distinguished role in physics (e.g. in dynamics, in relativity and so on).
as soon as I see "there's nothing surprising about it [PERIOD]" followed by an essay I'm scrolling. There obviously is an explanation and since you need a whole book to explain it it's obviously not that simple
I buy into this to an extent. To extent that self-referential may just imply the observations, analysis, conclusions, research publication and so forth mostly take place in human thinkspace. BUT! There are two considerations on that. Has evolution of human form also been influenced by the universe and all that is in it?
If the probability of everything occurring is a bell curve, all their deviations from the equations describing it are errors and the equations will center around describing bell curves.
@@danibarack552 being not surprising and being simple are two different things. this article is supposed to be about something that is novel and thus surprising. and no, my whole comment is not an explanation, I just mentioned many examples which are similar to this Zipf's law.
[because our reality is (or at least it seems to be) three dimensional] - Kant would probably say: We are thinking (naturally) threedimensional ... at the end we break everything down finally to these dimensions. Intresting comment by the way. Need to think about it.
Thanks Sabine, this is very interesting. > 1:53 I wrote up some routines way back to search out frequencies of letters, n-letter groups and words, but never thought to graph/plot the frequencies. Time maybe for a refresh with better routines and some UI outputs :P > 3:43 To reverse that, does it tell us something about the way "we" create the mathematics? Do we instinctively follow natural rules when we create systems of explanation and communication? (does this then emerge in our language expressions where we see that filtering through.) > 4:08 You really have sparked my curiosity. Most modern code that we use is based upon human language principles (That was the concept of BASIC and other early languages. Beginner's All-purpose Symbolic Instruction Code, meaning that it follows natural language constructs so that is a more human like form of communication with the hardware.) . I would be curious if these patterns exist in the lower level CPU based Assembly or hexadecimal, or are only present in the symbolic code language. > 5:20 Personally I think the Universe is trolling us :) *All that I can think and say is that a 0.0...1% difference can change everything and we have no way to deal with that in the current math. Sometimes even the smallest things in the universe are not necessarily measurement noise or error to be rounded out :)*
Considering the letters come from the brain,where thoughts are generated by electrical impulse function we can therefore assume the shape of the letters have also a relation as they are a result of.
Mathematics is not used to describe observations but to describe interactions. We only “know” reality through our interactions with reality. Mathematics is a product of adaptation of the community of neurons that is our brain. This is a self organized system where reason, logic and math, emerges as an optimization process (optimization of our actions on reality). There is consequently an image of ourselves in our image of reality built from physics. In other words you cannot separate reality from who we are and it is an error (metaphysics) to believe the image is reality.
i have a vague memory that Noether's Theorem is a kind of law of laws. Wikipedia says ... Noether's theorem states that every continuous symmetry of the action of a physical system with conservative forces has a corresponding conservation law. .. except (pouty face), i don't understand what that means.
Even the lorentz transformations(and SR) could be a law of laws in that sense. This is a reflection of Linear Algebra being the algebra of algebras.But, a true law of laws, i think, should be connecting diverse and non-analogous laws.
I like the contrast of taking all these formulas and equations that are upheld as elegant gems, and throwing them all into a big cauldron to be boiled down to a histogram of operation occurrence
Really interesting that subtract was ranked above add! Really shows us something about how we write equations. E.g. we're much more likely to write F = ma than F - ma = 0. And if things get complicated with lots of terms, we're likely to define new symbols for the sums of many terms, rather than keep writing them out in full.
The stability of the electron and proton are a clue as well. How well-constrained is the Hamiltonian of the Universe that describes all interactions? Maybe the path forward for physics is to elucidate the poorly constrained portions?
That´s great and surprising content, not surprising to find it on this channel first. It shows in my limited insight, that researchers should look over their own horizon more often and think outside their box. So far, linguists like Chomsky searched for an underlying theory of grammar their whole lifes, and the question, if God is a mathematician is asked since hundreds of years by philosophers and physicists now. We just begin to understand, what´s behind it. Still great times for science, no chance for science deniers.
I am not sure "science deniers" is the concern. I would say very few people deny that science is what has allowed their car to be developed to the point where they can reliably get to work. Motor vehicle owners who are skeptical on what the impacts of using said car are not the problem. That falls back on a lack of trust in science, and as time goes on it becomes apparent that lack of trust is well deserved in some instances. If you want people on board with your science, do good science and let the facts speak for themselves. People are not "dumb." The average person has average intelligence. If you can't make him understand important science, then you haven't tried hard enough or have taken the wrong approach.
@@msromike123I agree, one addition: I sometimes use exactly your car argument in talks with "sience deniers", that science is the basis of the technology we use and that the fact, that it´s helpful and works, is a hint that science tells us some true aspects of reality. Interesting enough, this argument is quite strong and convincing. Many people who are sceptic about science don´t even think about this connection that is so obvious. It´s not a lack of intelligence, but of the perceptions of context.
Super interesting. Some things are probably just more useful than others. Knowledge and skills likely start with the most useful and then become more and more specialized to rarer and rarer cases/uses. So efficiency and usefulness. We say Mom first and for about a year it is our most common word but after a few years, it is greatly surpassed by "I", etc. We add first and probably most frequently and square root a lot less.
We don't have more than two time derivatives, because you can typically re-write higher order time derivatives as an interaction of fields with lower order time derivatives.
I'd actually second the proposal that systems are key to reducing number of time derivatives. There is only the first order derivative in qualitative theory of ODEs, there is only the first derivative in state space representation in linear system control theory.
Very interesting to know about zipfs law! I believe the freq of subexpressions has to do with humans. And probably not going more than acceleration is due that they can explain many phenomena up to this derivative
We already knew this. It's why gravity is pursued using field theories. They *worked* already. We could easily feed data into a neural net and spit out equations that would look like better explanations for the data, though you'd be hard off trying to decipher a model.
Couldn't comment without payment? Not that this isn't worth it but strange. This sort of talk I enjoy the most. I suspect there is gold among the manushia. I imagine the pink shirt being auctioned in the future. Thank you Doctor
The law of all laws is causality. All laws are causality in different forms, physical and non-physical. Physics is limited to the physical properties and processes. That's why it cannot see the full picture. Most physicists don't think outside the box. If there's no empirical data, it's not real, they say.
A few days ago you commented on Wolfram. This is supportive of Wolfram's thinking; that there's a single fundamental regularity and that we can find it. He's gone off to search for using a particular approach; he might be totally on the wrong track; and as always he totally fails to acknowledge the contributions of anyone else--either anyone who has ever hit on similar ideas before or even his own co-researchers. But I agree with your analysis from a few days ago: it's nice to see someone trying to get at this presumed fundamental regularity directly.
Math is a language, we have fractional exponents everywhere particularly in second order phase transitions, the flux of cosmic rays follows power laws with real exponents, it's also very similar to zipf law itself, we have higher than second derivatives in physics, higher derivative gravity for example, and besides that, many physical quantity are analytic function, so that means they are defined in terms of their Taylor series that contains infinite derivative. About the paper, the sample is clearly too small and the choice that they made can bias the result but it's definitely worth exploring more, and repeat the study for c++ , python, and other programing languages, Math is a language but not a traditional one so it would make sense that the law is different from Zipf
"Math is a language, ..." ... the *flux* of cosmic rays ..." " ... many physical quantity are analytic function ..." " ... Taylor series that contains infinite derivative. ..." You have my attention :P (Especially the flux part) . I am someone that is leaning towards what I would describe as "An Analog Universe". Digital based math can't describe 100% accurately because it is a system based upon infinities (in a constant state of flux). ADC breaks the true beauty of analog. I have been looking for a way to "emulate" that analog problem on a digital system. I have looked at a number of number systems that can approximate an infinite value, so I was wondering if the Taylor series has any potential in that?
Well that was definitely thought-provoking, I wish the video was longer. I miss your longer videos. Unfortunately on this topic there probably isn't much more to say except its a thought provoking provoking mystery.
Most of math is just the count function iterated. So adding is just iterated counting, multiplying is iterated addition, exponentiation is iterated multiplication. Also these have an inverse, which is harder to do.
Math is based on classical physics. Infinitely small points are classical. They don't exist in QM. When you study what the math axioms say , they all use thinking in terms of Newtons classical world. So it is really a mirror of classical physics. QM is based on this math but if they develop a quantum math , it might work a lot better but currently no one is working on this. This kind of quantum math would have numbers that would be forbidden. Like between zero and one, there would not be a continuum of numbers, there were be gaps. There really has to be gaps because every single thing in the universe is a large wave function and it would have quantized everything, including allowed numbers. Zero cannot exist because of the uncertainty principle but in Newtons physics you can have zero. You just can't have an infinitely small point in math or any other thing because of the uncertainty principle. Your brain and thoughts and ideas are resting on top of a quantum wave function. They emerge from this complicated structure. Mathematical space is a mirror of real space. Numbers are the same as discreet particles. But we all know discreet particles don't exist , they are just high frequency waves in the quantum fields. True and false can co exist in a superposition as in a superposition of an electron that is either spin up or spin down. The philosophers forbid this but it really happens.
All those words used after acceleration sound like someone with an odd sense of humour named them. I have never heard them. But I am also not from an English-speaking country. Clothoids sound made up. When I have time I will try to derivate and visualize this (and read it up).
@@andrewhotston983Axion laundry detergent for emergent fundable but not demonstrable physics. Chase neutrinoless double-beta decay, a statistical absurdity FUBAR. Add "dark matter" searches whose utter failures fuel more looks. Physics is a walk of shame where it contradicts thermodynamics - Hund's Paradox - look. The punctilious computer crunch was a molecule with near no barrier to racemization, HSSH. Try single enantiomer twistanone or especially D3-trishomocubanone. >C=O TO >CF2 is facile. Have QM invert a *pentacyclic* 11-carbon hollow ball of 8 chiral centers. Tunnel that. Pentacyclo[6.3.0.0^(2,6).0^(3,10).0^(5,9)]undecane
The book O Universo em Retro Expansão, in Portuguese, speak about Scalar Domains. It suggest a table crossing all physic laws by their applicability. What do you think?
For me the universe seams fundamentally irrational. Logic and especially math can't accurately describe that system. But math is a far simpler language of description, even if it doesn't work well.
I know it's not really relevant, but I appreciate that you carefully avoid saying "begs the question" and instead come up with a new and clever way to say something like "leaves us with the question." Such a small detail is refreshing in a sea of UA-camrs who misappropriate the other phrase.
My main concern is how to survive all of these financial and political crisis, especially in light of the US political power scuffle. The government has really called things more difficult for its citizens, and we can't sit back and bear all the consequences of the bad governance.
I had a lot of financial crisis then, I can recall when I was homeless and faced with many things in life until $75,000 biweekly began rolling in and my Life went from A homeless nobody to a different person with good things to offer.
I thank Laura Jennifer Reeves who has always been there to help me with detailed analysis and recommendations that I would not have had access to otherwise.
YES! That is exactly her name (Laura Jennifer Reeves). I saw her interview on CNN News and so many people highly recommended her and her trading skills❤️
“Thank you, Dr. Hossenfelder, for your thought-provoking video on language evolution and the unique periodicity of mathematics. Your insights into how languages follow distinct curves, while mathematics remains cyclic, really resonated. I think I’ve come across some observations aligned with your previous works, that could shed light on why mathematics might be inherently cyclic. I shared these thoughts by email-nothing earth-shattering, just some details linking your perspectives. I’d be honored if you have a chance to take a look.”
When Sabine spoke about virtually absent third derivatives of Time, I immediately thought of the Gravitational Field, which for a moving object is a third derivative of Time, but no one writes it down that way.
minor point. Zipf law was described by Estopel in 1912 long before Zipf. it was recdescribed in 1920 also before Zipf. its status as a law is debated. it is an empirical finding without a known basis. it is not alone in this. Tsylors law and the species area laws in ecology are also widely applicable but have no know basis. the species area law seems to apply from puddles to continents which gives it a lot of credibility in ecology but without any idea why it applies
Pink noise has 1/f fall of in spectral amplitude. 1/f fall off includes many diverse geophysical phenomena such as earthquakes and atmospheric conductivity to name but two. Earthquakes is a good example, large ones always occur less often than small ones, following a 1/f law. Words are discete and earthquake energy is continuous. Perhaps lanuage is a form of pink noise, or pink noise is more meaningful than we now think.
@andrewhotston983 Are you theorising that only white noise exists, and that there is always some additional filtering/amplification process involved that produces a noise power spectrum other than the flat one for white noise. Noise is always noise, i.e. not a thing other than noise, until we investigate and understand its ultimate origins.
In dynamics applications "jerk" (third derivative) is sometimes used as an effective cost function or as part of a regularization scheme. Control systems often work remarkably well if jerk is minimized. The point being that as sensors become more and more accurate, they become increasingly sensitive to disturbances and often require active control to get the most information out of their measurements. I suspect that higher order math will increasingly become more important in the future. It would be interesting to see if the law of numbers has changed over time as more sophisticated applications have been developed.
Sorry about the buzz in the audio. This is the last video from the affected batch -- buzz will disappear tomorrow.
I didn´t even notice, that´s the advantage of getting old😉
I'm not aware of any issue.
I always get a buzz out of Sabine's videos! 🫨
There's also audio?
The bees are in my ears! Get them out!
What’s missing, and I’m surprised Sabine didn’t pick up on this, is the relative complexity-increase or “notational compression” that is achieved between addition/subtraction versus multiplication/division versus exponentiation versus entire Taylor series (trigonometry). If those can be graded against one another then I would think that their ranking forms another log-log curve. After all, mathematicians wouldn’t have been compelled to devise a higher order operation if it didn’t provide at least an exponential advantage.
Yup. Even more surprising considering how often she likes to pierce a new one when others make mistakes.
There is a paper here or there about it and AI modeling iirc. Not at a suitable computer to look for it.
@@MN-vz8qm This is hardly a mistake though, it is a fun fact that adds to the video.
Yeah... I think the point Sabine is making is observational regarding what maths guys persue like what physics guys persue.
Although it gives us impressive looking tracks and logs etc. which is nice nuu candy, its stiil a bit like what Pope Francis calls "Decadent Monasticism".
Brilliantly said. I hope Sabine reads this comment.
Physicists may not care about third and fourth time derivatives but mechanical engineers certainly do: jerk and snap
Static engineering: 😊
Dynamic engineering: 💀
Crackle pop
Yes , I have always wondered that if acceleration is the second derivative (or rate of change) of the objects position with respect to time , and acceleration is related to gravity then: What does the 3er derivative of position with respect to time (or the first derivative of acceleration with respect to time) will be able to tell us about gravity or else.
@lewebusl gravity is largely constant within the human scale. The trouble is transients like hitting a bump with your suspension while turning. Or in electrics with microphones and the resultant signals. It's taking an input and converting it to a desired output that sets the engineering mind ablaze. Outputs where the car doesn't spin out or the elimination of hiss of the mic by passing it through a noise filter processor. There is A LOT of math involved.
Your passengers certainly do!
I find Stephen Wolfram's answer to this, regarding computational irreducibility, the Ruliad, and we being the kind of observer we are, to be quite compelling.
I've also gained loads from Wolfram's insights and have deep respect for the guy. Speaking math, however, we need to be as precise as we can. When Wolfram speaks about "computationally bounded observer" (like us), the expression in that context means actually "statistically bounded observer".
Wolfram does very much wonder and even little speculate about what other kinds of observers could experience a slice of Ruliad in other computationally bounded ways.
Traditionally that question has been approach by the scientific methods of the shamanic arts. I consider practicing geometric intuition, especially when it requires a bodily self-transformation such as shifting your perspectictive and attention e.g. to that of a flat lander, a shamanic art.
The idea of Ruliad offers new language to discuss, contemplate and practice such geometric and computational shapeshifting. Obviously, adopting the perspective of a flat lander is already a major mold breaking in the Rulial space for a being used to 3D perspective.
When trying to get a better grasp of relativity, Pure Mathematician Norman Wildberger approached to question by adopting the perspective of a 2D bat relating externally by echolocation. Norman told that for a good while he practiced echolocation by making clicks and trying to find his way around his home the bat way. .
I read a book that stated marine biologists studied whale calls and found the sounds also follow Zipf's law.
What's the title?
Moby Dick
That's way awsomes. Thank you for throwing that in there :)
Most likely, whales also have a form of spoken language
@@Blodada3262put the comments words into google and volia you get a answer
I'm glad to see your return to form, Sabine. These are the types of videos I'm here for.
Meanwhile Sabine at home: I QUIT ACADEMIA BECAUSE PHYSICS IS IN CRISIS: NOBODY SEEMS TO UNDERSTAND WHY THIS EXPONENTIAL LAW HOLDS.
About Power Laws on Physics : It is PROBABLY due to the Weierstrass Theorem that ANY continuous function can be numerically approximated at any arithmetical degree by a DETERMINED Polynomial function with integer powers (if perhaps with non integers powers as in Runge Theorem in Advanced Complex Analysis) ) . If you loose continuity of the Objects, but keep well defined stochastic properties , you can loose significantly Power behaviors and get instead Exponential and power behaviors mixed see : Modern Physics Letters BVol. 17, No. 13n14, pp. 733-741 (2003) Research PapersNo Access
On the Statistics of an Ideal Gas with Mass Fluctuations and the Boltzman Ergodic Theorem: The Combined Boltzman-Tsallis Statistics
Luiz C. L. Botelho
An interesting one is that energy is typically related to the square of something. And that doesn't seem to be arbitrary. For example, if energy were proportional to any other power of electic and magnetic fields, interference would not be compatible with energy conservation. And the energy being proportional to the square of velocity ensures that the energy is independent of direction. However I have long had the feeling that there's something deeper to the "squareness" of energy.
linear for grav potential mgh but then kinetic 1/2 mv^2 fields vs motion ? Interesting
This is why the quantum relation E = h f , energy proportional to frequency, was challenging for the early theory of QM. If you are familiar with the Schrodinger equation, this E = h f is the reason that it has a first derivative in time, instead of 2nd derivative like a classical wave equation. Obtaining an equation with wave solutions but only a first time derivative is what forced the inclusion of the imaginary unit i in Schrodinger's equation.
When you have a process p that acts on a state x; if process p is at all a function of x (if p must use the value of x in any way), then it must lead to at least a "square" relationship.
@@kylebowles9820 Which can also include an equilateral triangular arrangement:
1
234
56789
10111213141516
The Pythagorean formula works equally well for square areas as for equi-triangular ones!
Potential *ENERGY* U=constant/r with r being distance for almost any field
I‘m not really surprised by this result considering the fact that statements and proofs in definition-less logical systems are known to grow exponentially in length. The exponential distribution of terms is probably just a consequence of us combining concepts and values into definitions (i.e. giving them names, instead of writing them out in most basic terms over and over again) and probably holds for (almost) all mathematical texts, not just those dealing with physics.
you mean by logical calculus (application of the rules on first principles / axioms )? Could you explain in more detail? Do you have a good reference? Thanks!
I think that's an interesting insight about defining combinations of concepts into another concept leading to the exponential distribution in mathematics. For example, one can see that the need to use many addition's is saved by defining the concept of multiplication and the need to use many multiplications is saved by defining the concept of exponentiation.
However, natural languages also define a word to designate a concept that consists of other words designating other concepts that combine to form the meaning of that concept. So, why does the distribution for natural language words usage differ from exponential?
@@luminiferous1960 I like your thinking & the question you pose. I cannot define what I feel, but something nudges me to think of it as an evolutionary survival strategy innate in humans. If natural languages were as laser-focused & precise as mathematical languages, would there be less diversity and a higher chance that events could do more damage to a population? Natural languages being subject to interpretation, people will differentiate and more scattering happens, reducing the chance of catastrophy decimating *all* of the now-separate groups... ?
@@phononify I read about it in the book “Type Theory and Formal Proof”. There is a question on stackexchange discussing the passage in the book which I cannot link here but you can find it by searching for "definition-less mathematics". The comments and answers there might guide you to original sources.
@@luminiferous1960 I think the reason is a combination of the facts that natural languages feature both redundancies and ambiguities and that they were created bottom-up by an evolutionary process instead of top-down guided by the idea to have parsimonious and consistent languages to discuss all of logic/mathematics.
physicist : yeah we never have fractional exponents.
physicist : yeah , so lets multiply function by exp( i*2*pi*t) and integrate it.
"If you don't like your number, just exponentiate by i."
-Euler (Ok, I made that up)
@@pierfrancescopeperoni Actually, it WAS Euler. He pulled it out of his ass also.
There are also fractional dimension integrals in QFT. Fractional INTEGRALS, man
LOL. But fourier transforms and series have nothing to do with fundamental laws of physics, it is a purely mathematical syntaxis.
Also, *t* is a matrix
Concerning the lack of third derivatives, I believe that the Ostrogradsky instability provides the beginning of an answer : in analytical mecanics, the equations of motion with third derivatives leads to non inferiorly bounded hamiltonian, which means instabilities because the system tends to be in the lower energy state (an energy of -infinity in this case).
Kind of morally similar to why gravity and EM follow an inverse square law: Othewise orbits would be unstable, and so we wouldn't be here asking about it
Engineers have a name for the third derivative with respect to time. In German they call it "Ruck", in English Wikipedia says it's called "jerk" or "jolt". They say that too much "jerk" can destroy certain kinds of machines and must be limited.
@@amigalemmingnot sure about machinery but imagine escalator starting with a jerk. Absolute force may be low but applied without giving passengers any chance to prepare and you have recipe for disaster.
I love how you sprinkle in bits of sarcasm and humor into your monologue with a straight face:))
Your "two time derivatives" comment at 5:00 brings to mind a comment in C. Hartley Rogers book "Theory of Recursive Functions and Effective Computability" - "Almost all statements which (I) have been extensively studied by mathematicians and (ii) are known to be arithmetically expressible can be seen, from a relatively superficial examination, to have quite low level in the [arithmetical hierarchy]. As has been occasionally remoarked, the human mind seems limited in its ability to understand and visualize beyond four or five alternations of quantifier." So Rogers is pointing to limitations in the ability of humans to conceptualize nature as the reason for some of the regularity.
Yes, that makes a lot of sense. But also among the ones that we can understand and visualize, nature seems to only use specific ones.
@@SabineHossenfelder That's rather fortunate for us. We're having trouble enough dealing with a single nature of reality.
math and physics seem to favor simpler layers-as if our minds impose natural limits on how much complexity we can truly grasp.
@@Pure_Science_and_Technology It's not about "grasping". It is about what can be done with those laws. We stick to the things that can be computed for practical purposes.
In other words, we are too stupid to come up with better ways of expressing things.
Sounds more like a sort of p hacking to me. Constants is one category, but variables are split up (x gets its own). Square is one category, but so is exp (which also includes ³ etc I assume). There are also additional options to change the categories (e.g. make numbers to digits) --> Changing the categories a couple of times and you will find something which fits a neat curve.
"but so is exp (which also includes ³ etc I assume)" I'm neither a physicist nor a mathematician, but I certainly can't rewrite x^3 as something involving exp(x); not without canceling it out that is. log(exp(x))^3 works for real numbers.
That said, "Constants is one category, but variables are split up" does look fishy.
@@gernottiefenbrunner172 the initial poster meant x^3 = exp(3 ln(x)), I think; (and in general x^a = exp(a ln(x)); this is actually a usual definition of x^a for irrational a's!) You don't really need powers if you allow exps and logs.
@@lism6 Wow, that's actually simple enough to make it embarrassing I couldn't do it, even though I didn't do much math after I failed at school.
Especially since I do recognize this representation from the formula for the derivative of a^x
I assumed they'd be using exp as a general operator for ^; Independent of whether it's e^x, x^3 or 2^x; I now skimmed through the paper and it seems like they are using exp for e^sth, have defined ^2, ^3 and ^4 as special oeprators and all other sth^sth are considered pow. So I guess my assumption was wrong, but the reality is not less arbitrary :/
They are also excluding any functions which include integral or differential operators without giving any reason why... They also defined +, - and neg, I guess you could get rid of either neg or -.
Some people are freaked out by deep space. Some people are freaked out by clowns. I'm freaked out by the idea that our brains are formed in such a way - that some of our observations are just as likely to be a reflection of our own process patterns as they are to be objective truths. I think sometimes we're trying to study the hardware of the universe, but unknowingly study the software between our ears.
This beautifully captures my feelings and is evocative of deeper thought :)
The only problem is to find out which is which... or if it doesn't matter. Like with the anthropic principle.
Loved the example with a pond. Where pond states, that the earth beneath was made perfectly, just to fit it's shape 😊
Another way to put it is that in order to fit things in the limited storage between our ears, we create a lot of unintended compression artifacts, which we then fail to identify as such.
yes this universe is simulated and it will restart soon like a sims game (like the sims on ps2 with the grandma)
I would call this the measurement error or bias in the instrument that we use to determine all of this; i.e., the brain.
It's like the statement, 'Any portrait is a self-portrait.' That doesn't change the fact that it is a portrait.
Easy explanation:
Natural languages have low entropy while the language of mathematics has high entropy.
Natural languages contain a lot of redundancy and are very tolerant of a wide variation in pronunciation and spelling. They could be compressed significantly without losing any information.
In contrast, the language of mathematics is already highly compressed and almost not at all error-tolerant. A symbol added or changed gives a completely different meaning.
This difference of course affect the frequencies of used symbols or tokens.
It would be interesting doing the same study with the language of biology, DNA. As far as I know, this language also has a lot of redundancies, which makes DNA error-prone to a certain extent.
On the surface, this would look like an interesting deviation from a known pattern. However, given that they only used a very specific subset of equations, I wonder if this is a case of sample bias.
Possibly, but their sample is in three or four very different areas. It would seem that the authors did at least try to account for this.
They aren't equations they are operations, equations are built using operations and variables. these are the most common building blocks of equations that are shown
@@NemisCassander lectures from a notable physicist, a list of equations named after people and a body of written equations? No they are very similar samples and none of them are direct measures of nature or natural systems. It's the answer you get to the question "How can I publish my paper without doing any fieldwork?"
If math is self-referential (it is), then everything can be derived from one or two numbers (relations), which is the definition of Zipf's law for distributions. Listen closely: everything grows exponentially from the speed of growth, OK? How is this finding any surprising then? STRUCTURE is what's more important (and I have Some)
I have a problem with the graph presented at 3:23: there are only 12 values. Any ranking will exhibit some kind of decrease, but this only becomes interesting if there is a huge dynamic range, like there is in the ranking of word order, or, indeed, ANY quantity that exhibits a huge range of values, like city populations, sizes of lakes, etc. Those rankings also obey Zipf's law.
Its still a valid observation imho. There are substancial quantities involved.
So? Lots of systems that you can regard as language have much fewer "words" than English. Programming languages for instance. And those surely comply to Zipf's law.
@@perpetualrabbit The distinction I'm drawing is between lists that have a huge number of elements, like a spoken language, and the short list of 12 elements in the graph. That list does not have enough of a dynamic range to admit of a meaningful scaling law. By dynamic range, I mean the range of frequencies of occurrence of the elements. For English, this ranges from about 2.4% for "and" to at least as low as about 5 x 10^-8 for "zoroaster" - in other words, a range of about 10^7 or higher. The dynamic range for the symbols in the graph is only about 30. However, it must be said that there are a lot more symbols used in equations than the 12 in the graph. I'd like to see the whole set.
I know someone who takes the 3rd time derivative of position, what a jerk!
That's an S-Tier joke!
I understand, and I still don't think it's funny, but this comment is haha
Don't worry @@shipsahoy1793. I don't understand and even i don't find it funny
Deviant!
What is the third time derivative of position?
There have been several experiments where artificial intelligence (AI) systems were tasked with discovering the underlying laws of physical systems without any prior knowledge or predefined equations. The results were surprising: the AI identified entirely new variables and equations that differ significantly from traditional human approaches. This suggests that there may be multiple ways to understand and describe the laws of physics. It seems that the symmetry discussed in the video may stem from the fact that both the equations and the language we use are human constructs, shaped by subconscious patterns such as efficiency, beauty, and golden ratio.
There are generally infinitely many different ways to mathematically formulate the exact same theory. This is why we have so many interpretations of quantum mechanics. It is curious, if you think about it, that physicists do not have multiple interpretations of other theories. (Or to the extent that they do, they didn't spread very far.)
@@SabineHossenfelder It might be interesting to submit these reformulations, and perhaps a random set of other possible reformulations in order to remove the human element as much as possible, and then see if the exponential law still applies.
In the ability of calculating, humans are in the lowest third of all species! That there are people believing in Einsteins bullshit and trigonometry is just using tables, is a proof (logical=mathematical).
i would like to see these experiments, could you provide a name
@ isn’t it embarrassing to ask for “Allgemeinwissen”? You could watch almost every documentary about animal-skills. Humans are in competition with ants if it comes to calculating things. We are just doing abstract nonsense like LLM
I don’t know if it’s fair to call this a power law of power laws, but I might be thinking about this the wrong way. But both Zipf’s law and this one are dependent on representation, as you mentioned. That means they’re more “laws” in the sense that they describe human behavior rather than natural laws. It feels a little bit like a conditionally convergent series in that way, where the same series arranged differently can be made to sum to anything, whereas a natural law feels like it should more elementary and essential, like an absolutely convergent series where changing the order of the terms doesn’t affect the sum.
That’s admittedly a pretty hazy analogy, but still I think it’s apt.
We can see that the more familiar operations are more commonly used than the less common inverse operations: multiplication vs. division, addition vs. subtraction, square vs. square root. And squares are used to replace multiplication, anyway. This is likely to be due to human preference. The former are easier for us to use, so we often express things using these operations. Also, we tend to factorise to make expressions use these operations:
A - B - C - D will likely be written as A - (B + C + D) for convenience, and A÷B÷C÷D --> A÷(B×C×D). I think that largely explains the relative prevalence of inverse operations.
Regarding the different types of operations, it probably depends on how we think of the world. If we consider the world in terms of waves, we'd see a lot of sin, cos, and exponential functions. Otherwise, sin and cos mostly appear for angular relationships, and exponential functions would be largely from statistical mechanics.
Also, how many addition operations are counted in an infinite sum? Or for the total contribution of a large number of particles (total force applied by a mole of atoms)? I'm guessing it's the handful that are written for us to get the idea.
I suspect the reason why we don't see third derivatives and higher is due to Fourier's Principle. Every smooth curve can be decomposed into a series of sine and cosine functions, and repeatedly differentiating these functions will get you back where you started, up to a scale factor. I also think that this could be used to simplify a lot of complicated mathematics used in physics. For instance; starting from Riemann's curvature tensor and working out it's derivatives repeatedly until you get the scale factor, then equating this scale factor to one, might show a useful constraint on the value of the cosmological constant. Tho I don't yet have the mathematical skill to work this out fully myself.
Fourier series are periodic, which limits their application (e.g. in General Relativity the solution will exist for all t > 0 and not be periodic in general). Regarding your idea of a scale factor, sure for f(x) = sin(3x) you could say the scale factor is the constant in f''(x) = -9 f(x), but suppose you have g(x) = sin(x) + sin(3x) , what is the scale factor now?
@@iyziejane Those are very good points. For the first, the big bang, and possibly other bounce events would be cyclic. For the second, the scale factor in general cases gets complicated. I think it might be something like a power series or Taylor series. This is why I admit not having the maths chops to work it out myself. I think tho that such complex finctions could be simplified in the way that Tayor series often simplify to easier to manage functions like exponentials and the trig funcions themselves.
@@eddie5484 Thanks for the reply, I'm a physics professor and I enjoy considering student / novice ideas. Fourier analysis occupies plays a large role in quantum mechanics. You may enjoy reading about a generalization of Fourier series called "Sturm-Liouville theory", which shows that 2nd order linear differential equations of a certain type give rise to families of functions that can represent any function in the sense of Fourier series, y'' = -y and the trig functions are one example, Legendre polynomials are another, and most special functions in physics come from this construction. I also suggest to read about chaos theory, with nonlinear differential equations, which is where the fourier technique strongly breaks down.
@@iyziejaneYou are not a physics professor otherwise you would know that GR is mathematical nonsense.
The fact that you don't understand this proves mankind still hasn't evolved past the caveman/flat earth/stationary frame view of the universe.
Dont you think it odd that after a century worth of experimentation, GR and SR have failed to be validated.
Didn't Einstein himself say that the laws of physics are equally applicable in ALL frames of reference. How do you apply both GR and SR to the same frame.
The fact that you can't get QM to align with GR, or GR to align with anything for that matter, should clue you in on the fact it's BS.
@@stewiesaidthat In fact I basically agree with you that GR introduces too much nonsense, that the word salad about dynamical spacetime holds back progress in fundamental physics, that the higher order nonlinear terms of GR have never been and probably will never be verified, that it clashes terribly with quantum mechanics, which was the real physics revolution of the 20th century. That the main attachment to GR is a sociological one, because it is central to Einstein's hero story, and for that one man's ego and the culture he represents, physics has to suffer for 100 years. Is that good enough? I'm probably in the 0.1% of legitimate theoretical physicists who agree with you this much. PS Einstein plagiarized everything all through his career, not just the well known examples of relativity.
Excellent phenomenological point 0:43 - that ideas are a form of "reality." There is "real-reality" (physics, chemistry, etc.), social reality (our cultures), personal reality (our beliefs), virtual realities and sensoriums, etc. There are natural and what one might call "artificial" sciences and associated technologies, both of which can be studied, replicated, theorized about, and evaluated using mathematical and statistical and even philosophical methods and tools. Examples of the latter might well include architecture, jurisprudence, history, maybe non-Euclidean geometries, things that might not be studied if not for human curiosity, interest, and imagination (e.g., aviation), etc. Science fiction has a habit of turning into real factors in our worlds.
The underlying law may simply be a product of the stability of the phenomenon being described. Stable things appear more prolific. Prolific things are more likely to be investigated and described. Described things appear more important, and those are the ones that appear on the Graph.
What's crazy is this same distribution is present in word usage, which is supposedly an arbitrary action.
@@bartsanders1553That might be simply related to word length, as shorter words are likely to be used more frequently, as well as easy to learn and pronounce syllables like Maa and Paa.
Stability, nice. A likely candidate for the fundamental mechanisms at play.
@@picksalot1 It actually spreads out more than random selection. Single letter words ought to rank higher in general than multiple letter words, but it turns out to be not the case.
@@bartsanders1553 According to AI, the 10 most frequently used words in English are: "the, of, to, and, a, in, is, it, you, that."
Last video i commented about the need for the scientific community to do an analysis of the process of science. A second effort i think we need is to build a cohesive encyclopedia of established knowledge. The question yesterday was "how can we refine the process of science to do better?" The question under this exercise is: what cross-disciplinary facts are we missing that point at the next layer underneath our broad understanding of reality?
Intriguing. Thinking of math as a language, couldn't it evolve into something that literally cannot expose or solve certain problems? I could see where alien math might be able to solve problems that human math can't and vice versa. There are some languages on earth where it is very difficult to express certain concepts, while in other languages those concepts are relatively easier to communicate.
We have both verbal language and symbolic language to refer to and describe math. Math itself is not a language.
Conceptually, math isn’t a language. There are mathematical concepts you can use for communication and even consider them languages, but that doesn’t make math itself a language the same way colors or resonances aren’t a language.
@@johnrichardson7629 IDK, I disagree that math isn't a language. For decades (centuries?) in Universities, Math was part of the faculty of arts and not science. It was considered a language. When I studied Engineering, it was well drilled into our heads that mathematics was the language that engineering is taught in so the complete immersion in mathematics (rather then engineering concepts) the first few years was required first, then engineering principals came after you could speak the language.
This is the difference between mathematics and *Mathematics*
@wally7856 None of that makes it a language. We make mathematical statements that refer to mathematical objects and their properties and relations just like we make natural language statements to refer to things and their properties and relations. Surely you recall being taught the difference between numbers, ie the things referred to, vs numerals, ie the symbols we use to refer to them.
Some of the simplicity comes from the structure of spacetime, it has 3 dimensions on macroscopic scales, and it's precisely flat, except for subatomic and cosmological scales. This is reflected in the laws of physics, which then preserved through all the emerging higher layers of reality.
The reason for why spacetime is like that is mostly explained by inflation and the anthropic principle. Btw. if you want proof for the inflation model, just check the prices at your local supermarket.
The other reason is that we use approximations, linear mostly, which hides the ugly parts in most cases. And if this fails, we just through up our hands, and use numerical models instead. Those are the same simple approximations, just used for each small bit of the problem, then aggregated.
The book, "One, Two, Three, Infinity" shows this very well. I made my kids read it when they could understand it.
Many real-world relationships are approximately linear, in isolation, and we tend to look at things deconstructively rather than holistically to better understand them. By holding some things constant even when they aren't, we gain insights into the components of complex systems. Describing the whole system in a single equation isn't impossible, but it is very hard to fully capture complex dynamics in that way.
Thanks for that insight.
I would add that when taking the first steps beyond linear approximations to understand the complex nonlinear relationships, scientists most often start with expanding a nonlinear relationship into a power series approximation so that the contributions from each higher order term can be determined.
This is also a reductionist (or as you phrased it, deconstructive) rather than holistic approach to understanding complex nonlinear relations, but it has been very successful in modelling complex nonlinear dynamics in many scientific and engineering disciplines when the contributions from higher order terms diminishes rapidly with increasing order.
In defence of the decomposition approach, if you take the view that the most fundamental things are small, then it makes sense to start with the small and build up.
@luminiferous1960 yes reductionist was in fact the term that was evading my memory at the time. And I agree with the tendency to work around limitations of existing solutions to extend them, rather than looking for a more fundamental correction of early approximations. It's a huge problem in economics.
@audiodead7302
"Those who would destroy a thing in order to understand it have left the path of wisdom"
The context is Gandalf admonishing Saruman who put on a new magical, opal-like vestment and rejected the color white, which once represented him. He revealed himself finally as Saruman of Many Colors. He wished to become a power like Sauron and become himself a ring-lord and ring maker.
Chemical engineering took a major advance when an Indian found many chemical behaviors follow cubic functions (s-curves), which I doubt are in WikiKeepItSimpleStupid.
You forgot occams razor, interpreted as elegance is truth in physics, leading to supersymmetry, which is just that elegant math for physicists
Humans evolved in this universe through the interplay of its physical laws. Therefore it isn't surprising that we should create maths which also reflects those laws. If you sow beans then you'll get bean plants, not pineapple trees!
I hope this makes sense... We arise as part of the universe and our activities produce results which are also part of, and so in accordance with, the universe.
That's only because pineapples don't grow on tree's. If they did then maybe you would get some?
From an orderly early universe to a more chaotic disorganised one. Entropy wins in the end. I think these things are a symptoms of decay. Nearly inevitable.
This doesn't make sense. What's the alternative? That's what you're trying to say. That there is an alternative and it is impossible. What is it? That we get transplanted here from another universe, and then... Then what, instead, exactly?
Kind of like how life arises out of some of the most common elements in the universe.
You may have that the wrong way around. The universe *as we understand it* reflects the mathematics we use to create it as a structured entity out of mere sensation.
Read yer Kant....
Dear Sabine,
Thank you for brining much needed critical insight along with unparalleled humor to subjects I love but have fallen away from due to absolute boredom or unintelligible noise of popular discourse and misleading news sources.
I am an artist and developer. I love science (especially physics.) Without humor, I don't think the journey toward knowledge/discovery in any field is worth the candle. Thank you for bringing a brilliant level of humanity and artful discourse into my almost daily mourning routine.
One of the more interesting topics you have approached.
Thanks!
Could you please make a video about measurement? When you talk about the problem of measurement in science, do you mean a quantum observation event, the physical instruments that gathers data, or combining data into a metric space?
Sabine , probably (I SUPPOSE!) the base of this regularity in Mathematical Physics is rooted in our GEOMETRICAL EUCLIDEAN idealizations of the Physical World (EVEN WITH RIEMANNIAN STRUCTURE AS SUPPOSED IN GENERAL RELATIVITY or usual Strings and quantum field theory ) .For instance , in DIRECT PROBLEMS Classical mechanics , point particle trajectories are "perfect" geometrical curves in the ambient space (smooth and differentiable) . And there is a theorem very deep in Differential Geometry called the Fundamental theorem of curves that say that is enough to characterize curves COMPLETELY by using at most its second derivative ON CERTAIN VECTORIAL CURVE'S OBJECTS (The normalized tangent vector -its "velocity". Its normal and binormal -its "vectorial acceleration" in R3 -THE SECOND NEWTON LAW SAYS THAT THE FORCE IS A VERY SPECIAL KIND OF VECTOR ; ITS IS ALWAYS IN THE OSCULLATING TRAJECTORY PLANE .NO COMPONENT ALONG THE PARTICLE'S TRAJECTORY BINORMAL!!).) ! . The same mathematical behavior of second derivatives objects for surfaces, three dimensional volumes , in General Euclidean Manifolds, etc.. . So it could be of no surprise or not natural that Newton-s Law for motion only uses second VECTORIAL derivative for determine particle trajectories .By the way , Kinetic energies only involve first vectorial derivatives . Kinetic Energy is always quadratic in first derivatives (The minimum principle lead us to second derivative for motion equations ). Higher order equations could occurs in Classical mechanics always as 'Dissipative" laws -Friction . Of course if you have "Fractal trajectories" like particles moving in a turbulent fluid which have only its position as a meaningful geometrical object (as a Newtonian Particle) and not a well defined point wise Newtonian velocity or acceleration as a non random motion , you must use now an object called Stochastic Ito derivative which has meaning only for the "first" derivative!!(It is a sort of a kind of ...integral!) . And about Higher derivatives in Mathematical Physics ? .They exists , but they are very! , very ill behaviored for usual Physics !. In Inverse Problems they proliferate . In Geophysics , the main problem on seismic prospecting are the famous Ill conditioned mathematical problems (lost of inversibility and continuity on the underlying Math ) .In my time as a Mathematical Geophysicist , I was studying Random Transforms for inverse seismic imaging problem which had the promisse to be less sensitive to mathematical ill possednes ! .Totally different framework from usual Theoretical Wave Physics ! . See the following (old!) venerable Book on ill posed problems - "Solutions of ill-posed problems" , Andrei Nikolaevich Tikhonov and V Arsenin . -1977 . & Feynman path-integral representation for scalar-wave propagation
Luiz C. L. Botelho and Ricardo Vilhena
Phys. Rev. E 49, R1003(R) - Published 1 February 1994
Great video, Prof Hossenfelder!...but I wonder if we might be making a bit of a leap by suggesting that this statistical pattern in physics equations is a "Law of Natural Laws"--as stated in your video title and "Meta-Law of Nature" in the original paper.
The exponential decay pattern the authors found seems to reveal more about the structure of mathematical equations in physics rather than about nature itself. It’s fascinating and valuable for symbolic regression and other computational methods, for sure-but it’s also possible that these patterns result from human conventions and preferences in equation-building, rather than any fundamental aspect of nature. We naturally simplify equations with integers, familiar functions, and constants. So maybe this meta-law tells us more about the language we’ve created to describe nature than about the natural laws themselves.
Perhap, "Scientists Discover a Pattern in Physics Equations-Could It Reveal a Meta-Law?"...might be a more accurate title?
this law about word frequency i accidentally discovered it while creating a chatbot from scratch in luau. I use the inverse sigmoid to determine word weight along with emotional tonality, and entire context weighting. Then I activate the result with log and it works amazingly
thats interesting, thanks for sharing that
Kool. I was just thinking about some of my own experiments with it while watching the video :)
Non-uniform distribution of language elements is the basis for encryption/decryption and Huffman compression. The idea has been around for quite some time (centuries maybe?).
One of the most technically important areas of engineering involves varying fractional powers - adiabatic expansion and compression of real gases. It's a consequence of the different degrees of freedom of different molecules, along with different gas mixtures and dissociation.
That's funny. I remember studying physics and engineering dynamics, wondering if there were any processes that needed third or higher derivatives to describe said processses. Found myself so busy studying and working that I never found time to contemplate those sorts of things any further and forgot all about it.
Higher derivatives are used in mechanical engineering when dealing with reciprocating machines, cams, etc. The third time derivative of position is called "jerk".
You've found a nice retirement project, I think! 🙂
@@patrickgriffiths889 Indeed, neglecting the third derivative of position in cam design is a good start on premature engine failure.
Pure math is about patterns and relationships in the way you count, and account, for something (and even where nothing can be something).
The question is whether those patterns and relationships map in a useful way to what we can see and hear and smell and touch and taste and in any way experience.
It is nice to see natural laws being described by "natural" maths. It is always weird when you read that someone proved the existence of 15 dimensions or something. I mean did you prove it, or did you have some crazy theory and you needed to invent 15 dimensions to make the math work?
The additional dimensions are needed for string theory (i.e. to align any kind of prediction from that theory with reality). But string theory is very speculative. So you can safely assume that there are no additional dimensions beyond those that we observe (i.e. 3 space, 1 time dimension).
Young lady, you seem to pick the most fascinating aspects of physics and the related technology and boil it down to a succinct and clear presentation. It's fascinating how you follow the links of these concepts out into seemingly unrelated fields like language.
Sabine is a wonderful counter-example to the stereotypical hypothesis that Germans do not have a sense of humour.
For a relation that might reveal deeper physical truths, many of the categories should be amalgamated. In particular the *** and *÷* operators should combine, the powers and roots should combine, logs and exponents should combine. *X* should be amalgamated with the variables, not have its own category. All trigonometric operators should be combined. Initially I thought the same of *+* and - but provided equations are reasonably simplified (e.g. write - and not +- ), I no longer do.
What I'm saying with this is that operations that are inverses should be amalgamated because at the lowest level they are the same type. Expressions can often be rewritten to use one or the other. Now I'm neither mathematician nor physicist so I could be slightly off in some details, but I am more interested in the origins of physical principles and their relation with mathematics than I am interested in human tendencies in this context.
Benfords and zip laws are merely the principle of least action. Scaling is involved too
Huh?
any source for that claim?
I read the paper and the authors say they left out calculus completely from their analysis. So this analysis is just a start.
5:18 I believe this is the case
4:55 Ostrogradsky Instability is a possible explanation, as having 3rd derivatives (or higher) of position with respect to time in a theory of classical mechanics causes a Hamiltonian that is unbounded from below.
For over a decade, I've been telling people about how Zipf's law explains many things, from matter distribution in the universe to economic inequalities. Nice to see it's finally being explored for real.
does the law actually explain anything though?
@@user-sl6gn1ss8pNo, it is only a descriptor, not an explainer. The reason why we find these distributions in everything is not known.
Honest and ignorant question here: is there any relation to Paretto 80-20 law?
@@therealnotanerd_account2 Yes. They are almost identical. Zipf discovered the distribution by rank ordering word usage, and Paretto found it in, well, everything. The 80-20 principle is more application than observation, but follows the same idea.
It is a law about popularity and indeed can explain lots of things like cultural or religious distributions
I mean, for example so many force laws (& we have lots of them, so you would expect whole number exponents to be overrepresented) are inverse square because that is the nature of 3D space... to then read is as "ooh isn't it weird the exponents of things are almost always whole numbers" isn't telling us something we don't already know.
Whole numbers exponents often tell us about the dimensionality of things.
I'm willing to contemplate that there may be something deeper, but I would definitely caution about reading too much into it.
Fractional exponents are typically fractal things, non-standard dimensionality, in some senses infinite... typically not corresponding to measurable quantities.
0:09 is weirdly similar to the tree of the Sefirot. It just needs to show the root ball for Malkhut. Maybe those old rabbis were mathematicians too. :)
The Ancients, across many thousands of years, understood the universe far better than we do. We quantify and qualify, rather than interpret and integrate (with our lives).
We 'understand chemistry' but ignore that our bodies are a collective of electrochemkcal reactions, a coherence between billions of atoms acting in concert through DNA harmonics (2 Golden Ratio spirals linked by primordial hydrogen bonds).
We are star children in every meaning of the term.
It looks nothing like Sefirot other than ten entities with ten on top
They were.
Isn't it Malkuth?
There is some wonderful numerology in the Kabala. The quality of zero, the quality of one. The quality of two and so on.
Sort of predates topology and dimensional analysis.
I think the underlying structure is not only how we simplify, but also how we came up with notations in the first place and good the same expressions just get a simpler notation, if used often enough.
The famous Voynich Manuscript also follows Zipf's Law, even though it was written before the law was discovered.
That is interesting, it would be even more intresting to find out if other pseudo-scripts do or don't. Someone needs to analyse the Book of Mormon.
You probably already know this, but Ostrogradsky instability is potentially a partial explanation for why we usually don't have more than two time derivatives. Although, I think there are loopholes, so it's not a complete explanation.
The reason integers predominate is simple. If you look at a small sample of any object the count of objects will be an integer. Lets say count the number of rocks on a hillside and the number of visible rocks is 112, so the answer is 112 rocks. But this is very simplistic as each rock is different in weight, size, and location. It is the simplification of the classification process and the simplified description that delivers the integer 112. I am sure many of the rocks were chipped or eroded severely but nobody is going into detail in saying the number of rocks is 67.7 when all of this taken into account as many of the rocks might be only one half of the rock or only 3/4. Half of a grain of sand is still a grain of sand.
A transistor is a fundamental electronic component that even the edifice of AI today stands on. Not that 4 terminal components do not exist but all the critical ones are till 3.
there's nothing surprising about it. anyone who understands the following laws and relationships knows exactly what it's about: central limit theorem (normal distribution), Benford's law, Shannon's entropy, scale invariant power laws (which also lead to self-referentiality and fractals), Pareto's principle, and so on. fundamentally, this is a matter of probabilistic and statistical laws, which has a mathematical basis, but since mathematics is universal (which is self-evident), this regularity is also universal.
that is, the distribution of the first digits of numbers, the frequency distribution of letters in the alphabet (entropy), processes described by Pareto's principle (e.g. the proportion of marketing department spending that is meaningfully used), the distribution of fish sizes in the oceans, wealth distribution in economically under-regulated societies, the frequency of asteroids by size in space, and so on. why would it be any different in the case of scientific texts?
ps: as regards the two examples mentioned by Sabine, power laws in nature usually have integer exponent because our reality is (or at least it seems to be) three dimensional. e.g. the inverse square laws are due to this fact, since energy is spread out in a spherically symmetric way (at a given moment energy is distributed on a two dimensional spherical surface). of course, it doesn't mean that fractional dimensions and exponents cannot be more accurate, who knows? but e.g. 3/2 is also a pretty common exponent in physics (due to the combination of the three dimensional space and the Pythagorean theorem). and as regards the second example (why third temporal derivatives are not relevant): it has both natural and artificial reasons. the artificial reason is that we could use third order derivatives e.g. for position, but velocity (first order) and acceleration (second order) are the most important ones, so we usually neglect the third and higher orders. but e.g. "jerk" in kinematics is the third derivative of position, i.e. the acceleration of acceleration. in reality, almost every motion has non-zero third or higher order terms, we just ignore them. but why are we interested in only the first two? well, this is the natural reason: because change (velocity) and change of change (acceleration) are the most important terms. the former is always relative, meanwhile the latter is always absolute. self-referentialisation (latter) always leads to absoluteness. this is why acceleration has such a super distinguished role in physics (e.g. in dynamics, in relativity and so on).
as soon as I see "there's nothing surprising about it [PERIOD]" followed by an essay I'm scrolling. There obviously is an explanation and since you need a whole book to explain it it's obviously not that simple
I buy into this to an extent. To extent that self-referential may just imply the observations, analysis, conclusions, research publication and so forth mostly take place in human thinkspace. BUT! There are two considerations on that. Has evolution of human form also been influenced by the universe and all that is in it?
If the probability of everything occurring is a bell curve, all their deviations from the equations describing it are errors and the equations will center around describing bell curves.
@@danibarack552 being not surprising and being simple are two different things. this article is supposed to be about something that is novel and thus surprising. and no, my whole comment is not an explanation, I just mentioned many examples which are similar to this Zipf's law.
[because our reality is (or at least it seems to be) three dimensional] - Kant would probably say: We are thinking (naturally) threedimensional ... at the end we break everything down finally to these dimensions. Intresting comment by the way. Need to think about it.
Thanks Sabine, this is very interesting.
>
1:53 I wrote up some routines way back to search out frequencies of letters, n-letter groups and words, but never thought to graph/plot the frequencies. Time maybe for a refresh with better routines and some UI outputs :P
>
3:43 To reverse that, does it tell us something about the way "we" create the mathematics?
Do we instinctively follow natural rules when we create systems of explanation and communication? (does this then emerge in our language expressions where we see that filtering through.)
>
4:08 You really have sparked my curiosity. Most modern code that we use is based upon human language principles (That was the concept of BASIC and other early languages. Beginner's All-purpose Symbolic Instruction Code, meaning that it follows natural language constructs so that is a more human like form of communication with the hardware.)
.
I would be curious if these patterns exist in the lower level CPU based Assembly or hexadecimal, or are only present in the symbolic code language.
>
5:20 Personally I think the Universe is trolling us :)
*All that I can think and say is that a 0.0...1% difference can change everything and we have no way to deal with that in the current math. Sometimes even the smallest things in the universe are not necessarily measurement noise or error to be rounded out :)*
Considering the letters come from the brain,where thoughts are generated by electrical impulse function we can therefore assume the shape of the letters have also a relation as they are a result of.
I actually theorist there should be at least several languages where many if not all letters are based on mouth shapes.
@StuHol-jb1hh ua-cam.com/video/4GkeZfFX50I/v-deo.htmlsi=XzymYliVIHO5Lrgq
Mathematics is not used to describe observations but to describe interactions. We only “know” reality through our interactions with reality. Mathematics is a product of adaptation of the community of neurons that is our brain. This is a self organized system where reason, logic and math, emerges as an optimization process (optimization of our actions on reality). There is consequently an image of ourselves in our image of reality built from physics. In other words you cannot separate reality from who we are and it is an error (metaphysics) to believe the image is reality.
i have a vague memory that Noether's Theorem is a kind of law of laws. Wikipedia says ...
Noether's theorem states that every continuous symmetry of the action of a physical system with conservative forces has a corresponding conservation law.
.. except (pouty face), i don't understand what that means.
Even the lorentz transformations(and SR) could be a law of laws in that sense. This is a reflection of Linear Algebra being the algebra of algebras.But, a true law of laws, i think, should be connecting diverse and non-analogous laws.
1:55 l love this channel I love love this channel loves the channel
Interesting thoughts. Thanks 👍🏾
I like the contrast of taking all these formulas and equations that are upheld as elegant gems, and throwing them all into a big cauldron to be boiled down to a histogram of operation occurrence
This is definatly a worthwhile study to persue
Peruse or pursue?
@@leosmith848 pursue, i just spelled it wrong
@@lukaspeciura6225 I mentioned it because either word works, albeit with different nuances.
@@leosmith848 pursue is more accurate to what i'm trying to convey
Really interesting that subtract was ranked above add! Really shows us something about how we write equations.
E.g. we're much more likely to write F = ma than F - ma = 0.
And if things get complicated with lots of terms, we're likely to define new symbols for the sums of many terms, rather than keep writing them out in full.
The buzz is the microwave background of anger from the FROG of the previous video 😂
The stability of the electron and proton are a clue as well. How well-constrained is the Hamiltonian of the Universe that describes all interactions? Maybe the path forward for physics is to elucidate the poorly constrained portions?
I simply call it good 💛🖤
Interesting paper and discussion.
That´s great and surprising content, not surprising to find it on this channel first. It shows in my limited insight, that researchers should look over their own horizon more often and think outside their box. So far, linguists like Chomsky searched for an underlying theory of grammar their whole lifes, and the question, if God is a mathematician is asked since hundreds of years by philosophers and physicists now. We just begin to understand, what´s behind it. Still great times for science, no chance for science deniers.
I am not sure "science deniers" is the concern. I would say very few people deny that science is what has allowed their car to be developed to the point where they can reliably get to work. Motor vehicle owners who are skeptical on what the impacts of using said car are not the problem. That falls back on a lack of trust in science, and as time goes on it becomes apparent that lack of trust is well deserved in some instances. If you want people on board with your science, do good science and let the facts speak for themselves. People are not "dumb." The average person has average intelligence. If you can't make him understand important science, then you haven't tried hard enough or have taken the wrong approach.
@@msromike123I agree, one addition: I sometimes use exactly your car argument in talks with "sience deniers", that science is the basis of the technology we use and that the fact, that it´s helpful and works, is a hint that science tells us some true aspects of reality. Interesting enough, this argument is quite strong and convincing. Many people who are sceptic about science don´t even think about this connection that is so obvious. It´s not a lack of intelligence, but of the perceptions of context.
Not surprising and nuanced enough compared to some other channels.
@@msromike123 Are science deniers and any deniers of good thinking - just lazy thinkers ?
Fare thee well - on life's journey.
Super interesting. Some things are probably just more useful than others. Knowledge and skills likely start with the most useful and then become more and more specialized to rarer and rarer cases/uses. So efficiency and usefulness. We say Mom first and for about a year it is our most common word but after a few years, it is greatly surpassed by "I", etc. We add first and probably most frequently and square root a lot less.
We don't have more than two time derivatives, because you can typically re-write higher order time derivatives as an interaction of fields with lower order time derivatives.
No, it is not because of that.
Didnt she say something similar at 4:25
I'd actually second the proposal that systems are key to reducing number of time derivatives. There is only the first order derivative in qualitative theory of ODEs, there is only the first derivative in state space representation in linear system control theory.
Very interesting to know about zipfs law! I believe the freq of subexpressions has to do with humans. And probably not going more than acceleration is due that they can explain many phenomena up to this derivative
00:05:29 - when Sabine says "yes, there is" I feel like a good boy, wag my tail and scamper excitedly to the door lol
Thanks for the image 😂
We already knew this. It's why gravity is pursued using field theories. They *worked* already. We could easily feed data into a neural net and spit out equations that would look like better explanations for the data, though you'd be hard off trying to decipher a model.
I feel trolled by the universe every day...
Couldn't comment without payment? Not that this isn't worth it but strange. This sort of talk I enjoy the most. I suspect there is gold among the manushia. I imagine the pink shirt being auctioned in the future. Thank you Doctor
The law of all laws is causality. All laws are causality in different forms, physical and non-physical. Physics is limited to the physical properties and processes. That's why it cannot see the full picture. Most physicists don't think outside the box. If there's no empirical data, it's not real, they say.
lol
A few days ago you commented on Wolfram. This is supportive of Wolfram's thinking; that there's a single fundamental regularity and that we can find it. He's gone off to search for using a particular approach; he might be totally on the wrong track; and as always he totally fails to acknowledge the contributions of anyone else--either anyone who has ever hit on similar ideas before or even his own co-researchers. But I agree with your analysis from a few days ago: it's nice to see someone trying to get at this presumed fundamental regularity directly.
Your husband is the luckiest man alive
Indeed😉
Math is a language, we have fractional exponents everywhere particularly in second order phase transitions, the flux of cosmic rays follows power laws with real exponents, it's also very similar to zipf law itself, we have higher than second derivatives in physics, higher derivative gravity for example, and besides that, many physical quantity are analytic function, so that means they are defined in terms of their Taylor series that contains infinite derivative. About the paper, the sample is clearly too small and the choice that they made can bias the result but it's definitely worth exploring more, and repeat the study for c++ , python, and other programing languages, Math is a language but not a traditional one so it would make sense that the law is different from Zipf
"Math is a language, ..." ... the *flux* of cosmic rays ..." " ... many physical quantity are analytic function ..." " ... Taylor series that contains infinite derivative. ..."
You have my attention :P (Especially the flux part)
.
I am someone that is leaning towards what I would describe as "An Analog Universe". Digital based math can't describe 100% accurately because it is a system based upon infinities (in a constant state of flux). ADC breaks the true beauty of analog.
I have been looking for a way to "emulate" that analog problem on a digital system. I have looked at a number of number systems that can approximate an infinite value, so I was wondering if the Taylor series has any potential in that?
If we haven't hit the bottom yet, it cannot be very far.
It's turtles all the way down, mate. ;)
@@rogerkearns8094 Tubules all the way down, otherwise you will never get over the infinite event horizon at the bottom ;)
Well that was definitely thought-provoking, I wish the video was longer. I miss your longer videos.
Unfortunately on this topic there probably isn't much more to say except its a thought provoking provoking mystery.
How do we distinguish between, "Mathematics that appear in the laws of nature," and "Mathematics that humans like?"
Ask Chuck Norris.
It's the only way to be sure
@@jmjones7897 The question was about humans, not about celestial deities such as Charles Ray Norris.
Most of math is just the count function iterated. So adding is just iterated counting, multiplying is iterated addition, exponentiation is iterated multiplication. Also these have an inverse, which is harder to do.
Is math discovered or invented?
I don't think we will ever know until we find another species that uses math. Like aliens or AI. In other words, maybe never.
Yes
@@StylishHoboMade me LOL. 😂
Both
I think that's a false dichotomy.
Math is based on classical physics. Infinitely small points are classical. They don't exist in QM. When you study what the math axioms say , they all use thinking in terms of Newtons classical world. So it is really a mirror of classical physics. QM is based on this math but if they develop a quantum math , it might work a lot better but currently no one is working on this. This kind of quantum math would have numbers that would be forbidden. Like between zero and one, there would not be a continuum of numbers, there were be gaps. There really has to be gaps because every single thing in the universe is a large wave function and it would have quantized everything, including allowed numbers. Zero cannot exist because of the uncertainty principle but in Newtons physics you can have zero. You just can't have an infinitely small point in math or any other thing because of the uncertainty principle. Your brain and thoughts and ideas are resting on top of a quantum wave function. They emerge from this complicated structure. Mathematical space is a mirror of real space. Numbers are the same as discreet particles. But we all know discreet particles don't exist , they are just high frequency waves in the quantum fields. True and false can co exist in a superposition as in a superposition of an electron that is either spin up or spin down. The philosophers forbid this but it really happens.
Chained derivatives: Position, velocity, acceleration, jerk, snap, crackle, pop. _Snap_ is important for railway cars pursuing curves, re clothoids.
Breakfast cereals are for eating, not applied mathematics.
All those words used after acceleration sound like someone with an odd sense of humour named them. I have never heard them. But I am also not from an English-speaking country. Clothoids sound made up. When I have time I will try to derivate and visualize this (and read it up).
@@andrewhotston983Axion laundry detergent for emergent fundable but not demonstrable physics. Chase neutrinoless double-beta decay, a statistical absurdity FUBAR. Add "dark matter" searches whose utter failures fuel more looks. Physics is a walk of shame where it contradicts thermodynamics - Hund's Paradox - look. The punctilious computer crunch was a molecule with near no barrier to racemization, HSSH. Try single enantiomer twistanone or especially D3-trishomocubanone. >C=O TO >CF2 is facile. Have QM invert a *pentacyclic* 11-carbon hollow ball of 8 chiral centers. Tunnel that.
Pentacyclo[6.3.0.0^(2,6).0^(3,10).0^(5,9)]undecane
The book O Universo em Retro Expansão, in Portuguese, speak about Scalar Domains. It suggest a table crossing all physic laws by their applicability. What do you think?
The philosophical laws of logic preceeds maths.
I thought so for a while, but if that were the case we would be able to use logic as a foundation for mathematics, we tried and failed
For me the universe seams fundamentally irrational. Logic and especially math can't accurately describe that system. But math is a far simpler language of description, even if it doesn't work well.
@@axle.student logic is a subset of math, it can't be the case that math (which includes logic) is simpler than logic
@@moussaadem7933 Logic is NOT derived from math. Logic is independent.
That being said there are many forms of logic.
No.
I know it's not really relevant, but I appreciate that you carefully avoid saying "begs the question" and instead come up with a new and clever way to say something like "leaves us with the question." Such a small detail is refreshing in a sea of UA-camrs who misappropriate the other phrase.
My main concern is how to survive all of these financial and political crisis, especially in light of the US political power scuffle. The government has really called things more difficult for its citizens, and we can't sit back and bear all the consequences of the bad governance.
I had a lot of financial crisis then, I can recall when I was homeless and faced with many things in life until $75,000 biweekly began rolling in and my Life went from A homeless nobody to a different person with good things to offer.
Wow, that is huge, how do you earn that much? I am 37 years old and have been looking for ways to be successful, please how??
I thank Laura Jennifer Reeves who has always been there to help me with detailed analysis and recommendations that I would not have had access to otherwise.
YES! That is exactly her name (Laura Jennifer Reeves). I saw her interview on CNN News and so many people highly recommended her and her trading skills❤️
This is exactly what i have been searching for, she has captured my heart. She just got a new client
“Thank you, Dr. Hossenfelder, for your thought-provoking video on language evolution and the unique periodicity of mathematics. Your insights into how languages follow distinct curves, while mathematics remains cyclic, really resonated. I think I’ve come across some observations aligned with your previous works, that could shed light on why mathematics might be inherently cyclic. I shared these thoughts by email-nothing earth-shattering, just some details linking your perspectives. I’d be honored if you have a chance to take a look.”
When Sabine spoke about virtually absent third derivatives of Time, I immediately thought of the Gravitational Field, which for a moving object is a third derivative of Time, but no one writes it down that way.
I thought about jerk, which is the third derivative of position with respect to time. It is important to keep the jerk low in motion control.
@@milasudril It's the same intuition: changing acceleration over time.
minor point. Zipf law was described by Estopel in 1912 long before Zipf. it was recdescribed in 1920 also before Zipf.
its status as a law is debated. it is an empirical finding without a known basis. it is not alone in this. Tsylors law and the species area laws in ecology are also widely applicable but have no know basis. the species area law seems to apply from puddles to continents which gives it a lot of credibility in ecology but without any idea why it applies
I understood absolutely nothing but I'm glad the maths nerds have something to be excited about.
That´s nice😊
Pink noise has 1/f fall of in spectral amplitude. 1/f fall off includes many diverse geophysical phenomena such as earthquakes and atmospheric conductivity to name but two. Earthquakes is a good example, large ones always occur less often than small ones, following a 1/f law. Words are discete and earthquake energy is continuous. Perhaps lanuage is a form of pink noise, or pink noise is more meaningful than we now think.
Is pink noise even a thing?
@andrewhotston983 Are you theorising that only white noise exists, and that there is always some additional filtering/amplification process involved that produces a noise power spectrum other than the flat one for white noise. Noise is always noise, i.e. not a thing other than noise, until we investigate and understand its ultimate origins.
@@jamesrarathoon2235 I believe pink noise is just white noise that's been mildly red-shifted.
I didn't understand any of that and I suspect you don't either.
Yep, aint got a clue.
Did you vote successfully?
@@u.v.s.5583 Did you bother to get up this morning?
In dynamics applications "jerk" (third derivative) is sometimes used as an effective cost function or as part of a regularization scheme. Control systems often work remarkably well if jerk is minimized. The point being that as sensors become more and more accurate, they become increasingly sensitive to disturbances and often require active control to get the most information out of their measurements. I suspect that higher order math will increasingly become more important in the future. It would be interesting to see if the law of numbers has changed over time as more sophisticated applications have been developed.