Lean should quote Tao on their website: "It took me a month to learn"
For it to take Tao a month, how long does it take for us mere mortals?
@@lizzy1138 "Years of trials and tribulations"
@@lizzy1138 for everything new I learn, I will forget two things that I wish I knew.
I was observing what kind of questions he was posting on Zulip. I'd rather say it took Terence Tao a month to become fluent in Lean.
However, Terence is a full-time researcher/prof. I think an estimate of the number of hours he put into becoming fluent would have been more meaningful. Perhaps it was a full week of work or something, idk.
It is refreshing to hear a mathematician of Tao's reputation saying "This is very far from my area of expertise".
Can I kith your hawt mayowth
It's the hallmark of a serious academic.
@@pedrob3953 The smartest people know what they don't know.
Yeah, we'll see about that in 2 more years. Human IQs are fixed; AI, on the other hand.....
Literally every mathematician ever lol
I find it fascinating how this modern trend of proof formalization is moving mathematics more and more towards something resembling programming: similar to how a compiler checks for validity in your syntax and then executes your code, the proof system checks for validity in your proofs, and gives you a quasi-guarantee of correctness.
Machine proofs in the calculus of constructions are programming: dependently typed functional programming.
Strictly speaking, computer science was derived from pure maths, so it's no surprise... in the end it's all maths, or "pure language" (imo)
The proofs are actually encoded as types in a type system. Checking proof validity is actually just checking that a term is well typed, not dissimilar to type checking in a standard programming language.
In fact, Lean can be used both as a programming language and a theorem prover! It's sufficiently useful as a programming language that Lean itself is implemented in Lean and all the automation that Lean users use and write is also written in Lean.
The nice thing about all of this is that for many of the systems in Lean, the developers of Lean only need to implement them once and they will work both for the programming language and the theorem prover.
The Curry-Howard correspondence isn't all that modern and calling the foundation of computer science a trend is a bit off the mark. Proofs and programs are mathematically equivalent. Types are propositions, proofs are programs, normalizing a proof is running a program.
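To make that concrete, here's a tiny Lean 4 sketch of my own (not from the talk): the proposition is a type, and a proof is just a well-typed term of that type, with exactly the same shape as an ordinary function definition.

-- The proposition "A → B → A" is a type; a proof of it is a program of that type.
theorem const_proof (A B : Prop) : A → B → A :=
  fun ha _ => ha

-- The same shape as ordinary programming, just over data instead of propositions.
def constFn (α β : Type) : α → β → α :=
  fun a _ => a

-- Type checking these terms is the entire proof check.
#check const_proof

Checking that const_proof is well typed is all the "proof validation" there is.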
🎯 Key Takeaways for quick navigation:
03:30 🧠 *Introduction to Machine-Assisted Mathematics*
- Historical overview of machine-assisted mathematics and traditional methods.
05:51 🖥️ *New Modalities: Machine Learning Algorithms*
- Machine learning for pattern generation using large databases.
09:00 🧩 *Satisfiability Solvers (SAT Solvers) and Applications*
- SAT solvers explained with applications, complexity, and limitations.
11:17 🤖 *Large Language Models (LLMs) in Mathematics*
- Potential of large language models, including limitations.
13:20 🔄 *Formal Proof Assistants*
- Evolution of proofs using formal proof assistants.
19:42 🧩 *The Complexity of Hales' Proof*
- Challenges and controversies in Hales' proof.
20:38 🖥️ *Challenges in Refereeing and Formalization*
- Refereeing difficulties and controversies during formalization.
23:35 📉 *Peter Scholze's Liquid Tensor Experiment*
- Scholze's collaborative effort in formalizing mathematics.
26:09 📚 *Benefits and Discoveries in Formalization*
- Scholze's formalization revealing errors and improving collaboration.
27:15 🔄 *Blueprint Approach in Formalization*
- Use of blueprints aiding in formalization progress.
29:17 🌐 *Collaboration and Scalability in Formalization*
- Formalization projects enabling large-scale collaborations.
31:56 🕰️ *Time and Effort in Formalization*
- Current formalization time decreasing, aiming for widespread adoption.
37:05 🤖 *Machine Learning in PDE and Mathematics*
- Increasing use of machine learning in solving mathematical problems.
38:02 🧶 *Machine Learning in Knot Theory*
- Machine learning predicting knot signatures with high accuracy.
42:23 🤖 *GPT-4 Solving Mathematical Problems*
- GPT-4 solving IMO problems, integrating with tools for reliability.
46:31 🤯 *GitHub Copilot for Code Generation*
- GitHub Copilot automating code generation tasks.
50:08 🚀 *Future of AI in Mathematics*
- Improving AI formalization times for future collaboration.
Made with HARPA AI
talk starts at 3:30
thank you. mr. tao speaks perfectly well!
the haters haven't watched a single minute of this video.
Interesting. I tried to learn Lean a while ago and gave up.
I think I'll give it another shot :)
I learned the basics of it for a logic class I took at CMU. Then I took an actual logic class about model theory and gave up learning more
It took Tao a month, I reckon I could get it done in a few years.
Feels cool that I witnessed him learning the tool when I was also learning Lean from the community 😅
I'm just a programmer, and the highest math I ever took was Calc 2, but I found this fascinating.
What if we trained an LLM to turn human-readable blueprints into technical formalisation in Lean? Like a custom GPT
Is the volume on this video very low for anyone else?
Historical talk. This will go down as a turning point in mathematics
He's just presenting "the state of the art".
Maths is a difficult field for most students, especially in university. Personally, maths was really hard for me, but even though most of my teachers knew how bad my math skills were, they still took the time to tutor me, even though other students were more proficient in the field and could keep up with better scores. When a professor sees you pay so much attention, you gain their sympathy really fast. So I managed to pass the statistics class with a C+, and trigonometry as well.
It seems to me like the people taking on individual Blueprint tasks are the modern equivalent to the human computers from the last century.
evocative =)
Future AI should be able to help with this.
I also had some moments in which I thought I had found some self-similar structure in the whole development Tao is describing.
I was interested in artificial intelligence and AI-generated proofs a few years ago but I dropped the idea. Back then I really thought category theory would help with this in a way that will reduce the amount of redundancy in mathematics. I can't help but feel that some famous theorems are redundant or have already been proved in some other logically equivalent statement. However, I don't know how to go about checking this, and categories made sense. I was learning about Haskell at the time for some applied category work, and could probably talk to Professor Riehl (sp) about this, but I haven't spoken with her in a while and I don't really know where to begin.
I'd say start with theorems that we already know to be true, obviously, and ones that seem different but are categorically equivalent.
For instance, when I was reading Introduction to Measure Theory, I asserted that the category of measurable spaces forms an Abelian category. I'm not even sure if that's true or if it makes sense, but could I create some sort of AI to answer this question for me?
First I would need to tell the computer what an Abelian category is, but I can't let it use the definition from Wikipedia, because that has too much ambiguity, and we're trying to define a category independent of the notion of "set" so most programming languages cannot do this. I figured this is where "types" come from in programming but still, I haven't figured this out yet.
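For what it's worth, here is a minimal Lean 4 sketch of the kind of thing I mean (my own field names and layout, not mathlib's actual definitions): a category stated purely in terms of types, with no underlying "set".

-- A category as data: a type of objects, a type of morphisms between any two
-- objects, identities, composition, and the usual laws. No sets anywhere.
structure SmallCat where
  Obj     : Type
  Hom     : Obj → Obj → Type
  ident   : (a : Obj) → Hom a a
  comp    : {a b c : Obj} → Hom a b → Hom b c → Hom a c
  id_comp : ∀ {a b : Obj} (f : Hom a b), comp (ident a) f = f
  comp_id : ∀ {a b : Obj} (f : Hom a b), comp f (ident b) = f
  assoc   : ∀ {a b c d : Obj} (f : Hom a b) (g : Hom b c) (h : Hom c d),
            comp (comp f g) h = comp f (comp g h)

Stating "abelian" on top of this (zero object, biproducts, kernels and cokernels) would be more work, but it is the same game: every piece of the definition becomes a field of a structure, and the type checker holds you to it.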
I'm now interested in using Julia because it sounds prettier. There are some theorems that could possibly be proven if we were to create something in Julia (and maybe Haskell) but I would need to take an algorithms course, a course in Julia and a course in category theory before I can even prove/disprove what I'm saying.
I really think algebraic geometry will benefit a great deal from these assisted proofs, because so much of it in higher dimensions is combinatorial, that we can check things with much better precision. For this reason, I began learning a bit of complex visualization and Galois theory in Julia, but nothing is conclusive yet, since I usually don't have the discipline to even sit down and create anything.
But anyway, I will stop here and continue watching the lecture. I'm at 16:40 before my attention went elsewhere and I apologize for that. Let's keep going!
Every mathematical theorem is "redundant" because it follows logically from the axioms. So redundancy isn't necessarily bad in mathematics but indeed it is always wonderful to find connections between seemingly distinct areas of mathematics.
Okay, then that's not what I'm referring to, obviously. What I'm saying is that there are open, unsolved problems that are logically equivalent to statements that have already been proven, but people cannot spot that they are categorically the same as another result. These problems may have implications for applications in other fields, and their being unsolved could be resolved by showing they are equivalent in some way to something easier and more tractable.
It's not just wonderful, it's useful to find connections between these areas. Specialization has made these connections harder to spot. @@ronald3836
I'm not a bro. @@jessepowellr4
@@jessepowellr4 ur on the wrong video then blud
If I'm not mistaken, the initial object of measurable spaces is the empty set, while the final object of measurable spaces is a 1 element set. Since they differ, the category lacks a zero object, so it is not an abelian category.
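In symbols, my own write-up of the same argument (with Meas denoting the category of measurable spaces):

% A zero object must be simultaneously initial and terminal.
% In Meas the initial object is the empty space and the terminal object is a one-point space.
\[
\varnothing \ \text{initial}, \qquad \{\ast\} \ \text{terminal}, \qquad
\varnothing \not\cong \{\ast\}
\ \Longrightarrow\ \mathbf{Meas}\ \text{has no zero object, hence is not abelian.}
\]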
Wonderful, very informative talk.
He uses the Boadilla LaTeX theme, my favorite. :)
making presentations with latex is wild
Very common @@boywithacoin
@@boywithacoin you need to get out of the house if you think that is wild. LaTeX presentations are incredibly common; actually, I am very surprised if a math presentation in academia is not made with LaTeX.
@@holliswilliams8426 stop living in a bubble
Starts at @3:21
49:13 "The Geometry of the theories themselves" this is mind-blowing
In my mind this geometry would have to resemble the dependency graph Terry showed. Perhaps some classes of proofs (e.g. proofs by induction, proofs in euclidean geometry, etc.) would have a certain topological structure to their dependency graphs. Terry mentioned that these dependency graphs must be broken down into atomic pieces. Perhaps a proof in geometry using some triangle congruences would have all such atomic statements have even degree, just to give an example of some topological property they may have. This is really fascinating stuff and what a mathematical mind Terry has to even think of that.
what the heck, if it wasn't for you mentioning this I'd have lost it, so thank you. This statement alone generates a meta space for thinking that is absolutely fascinating
I wonder whether there is an area of active investigation into this, as finding commonalities among these geometries might mean deducing dependency graphs that could open solution spaces within theory spaces @@sp_danger1729
@@clasanna88 Haha glad I could help😁
As a software dev who likes advanced math, it is nice to see practices from open source software development cross-pollinating into mathematics. Collaboration with computer validation is the key to tackling the ever bigger challenges and searching for interconnections between the ever vaster subfields of mathematics. No human can understand everything, so the computer puts the ideas to the test (there are only two options on the computer: either the program does what it is supposed to do or it does not), so large-scale collaboration is necessary. In fact, all of what we know and have is based on a ton of collaboration and building/expanding on what other people have done (like we have in software dev: pull requests for bugs, forking projects that take the original in a whole different trajectory, etc.). Nice to see mathematics moving forward.
Great perspective. It's nice to see how these LLMs will help us develop faster, etc., as they are (natural language -> high-level language) translators, in the same way an OOP language was innovative for being a (high-level -> assembly) translator, and of course an assembler being an (assembly -> binary) translator.
@@vekmogo
I see them as mostly hype, as they are based on probabilistic models that regurgitate text they've seen online. They will never be able to figure out coding or do original math; training an LLM on solved problems and then claiming it "knows" how to solve them is just nonsense. But for regurgitating documentation of software libraries or making some simple example code snippets to review stuff you have forgotten, they are rather great. 😊
@@vekmogo
By the way, OOP is bad (not in the original Smalltalk sense with its "live objects" paradigm, but in the form the other languages implemented). Functional is better, or something mixed like Rust, or different like Zig. Abstractions can't hide everything, as they are always leaking.
@@PRIMARYATIAS I absolutely agree, they are just math and stochastics, overhyped for our investor overlords. And yes, you have an input signal and output signal pair, and they essentially look for a function that can map f(A)=B, that's all there is to it. People suddenly think they can “talk” to the computer.
About the OOP reply, I also agree and I never claimed otherwise. Truth is, we won’t move from OOP for a while.
Currently mathematics is "exact" in theory but in practice everything depends on how much we trust the author and referees of a paper (or our own ability to verify all the details).
There is really no excuse for forgoing formal verification of all math results once advancements in tools eliminate the current excessive burden.
And as a bonus we get a huge library of formal proofs from which AI will be able to extract general proof strategies.
Terry Tao, the only public speaker who speaks fast enough
Very interesting insight. I wonder how deep Tao is into LLM training technology; I feel like his problem with ChatGPT's garbage ideas would be easily solved by having a formal language be the corpus on which the AI is trained.
The Lean text bases must be large enough by now that they allow for transformer analysis.
> solved by having a formal language be the corpus on which the AI is trained.
What formal language are you referring to?
@@jtpmath Lean or Coq, like he says in the talk.
It might help a bit, but mathematical proofs require a lot of very creative thinking, which AI is really bad at @@angrymurloc7626
He explains later that the hallucinations can be eliminated by coupling the LLM to a formal theorem prover in a feedback loop.
The fact that formal math proofs can be verified by machine would seem to make math ideal for AI. In some decades (if not earlier) the role of human mathematicians will be reduced to deciding which mathematical structures and theorems are "interesting". And eventually AI will get good at that, too.
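A rough sketch of that feedback loop, in Python for concreteness (the function names here are hypothetical placeholders, not any real API):

# Hypothetical placeholders: swap in a real LLM client and a real Lean checker.
def ask_llm_for_proof(statement: str, feedback: str | None = None) -> str:
    """Ask a language model for a candidate Lean proof (placeholder)."""
    raise NotImplementedError

def check_with_lean(statement: str, candidate: str) -> tuple[bool, str]:
    """Run the Lean checker on the candidate; return (ok, error_message) (placeholder)."""
    raise NotImplementedError

def prove_with_feedback(statement: str, max_rounds: int = 5) -> str | None:
    """Generate a candidate, verify it formally, feed errors back, repeat."""
    feedback = None
    for _ in range(max_rounds):
        candidate = ask_llm_for_proof(statement, feedback)
        ok, error = check_with_lean(statement, candidate)
        if ok:
            return candidate  # only formally verified proofs are ever accepted
        feedback = error      # hallucinated steps come back as checker errors
    return None               # no verified proof found within the budget

The point is that nothing the model hallucinates can get past the checker, so the human only ever sees verified output.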
@@ronald3836 maybe, but LLMs are not there yet and it might take a while. We forgot how long the AI winter lasted before we got here.
What a great lecture!
Will the Q & A be posted here ?
Ah yes the OEIS. You know you're getting somewhere when your line of investigation produces relevant sequences that aren't found on it.
Excellent!!!!!!!!
Wildly interesting 🎉
Excellent talk, Professor, but the PowerPoint presentation is packed with text.
I actually paused the video and read every single one of them. For me it was a bonus point, not a negative one.
@@PRIMARYATIAS You can because you are on YouTube; the people at the conference could not do that. Usually for a live presentation you do not fill your slides like that. But he is Terence Tao, and it is better for him not to think about these details and to work on mathematics.
As well his slides should be. They stay up for a long time.
It is traditional to have very many, very sparsely populated slides, which are each up for a very short time. This keeps the reader enraptured, necessarily studying each slide in the moment, with no chance to (a) zone out and get lost nor to (b) zone out, think about something, and rejoin.
It is the right way to teach a class to college freshmen, and the wrong way for Terry to talk to his fellow math professors, who may well have something to _contribute_ . He _wants_ people to zone out and come back in, with well-informed comments.
(Full disclosure: I've known Terry since '92)
That's a beamer slide, not a PowerPoint.
Beamer slides with too much text are an absolute staple of every mathematics talk or conference.
What an absolute treat. Wow!
How about HMI for factory machine
amazing content!
This talk is for math people to appreciate. We, the commoners, will have a hard time appreciating his brilliance!!!
let's goooo! san francisco swimming!
This stuff gets me REALLY excited!
Man, I hope AI doesn't replace us. I would be more than happy to have it write proofs alongside us and help us tho
Superb!
Excellent talk!
fascinating talk
I was challenged to a drinking game where you had to take a shot every time Terence says "Uhm".
I got severe alcohol poisoning before the 04:37 mark
He's not great at giving presentations tbh. No one really cares though, as he is so good at research.
@@holliswilliams8426 Yeah, I have nothing but respect for the guy, he's clearly a genius. He's just got a funny quirk when speaking.
Kind thank you.
Starts at 3:34
Alternate speech:
You all know who Terence Tao is, so here he is.
Amazing talk...!
One more chance for decent audio levels went down the drain...
"Automated provers could also be used to explore the space of proofs itself, beyond the small set of 'human-generatable' proofs that often require one to stay close to other sources of intuition, such as existing literature or connections to other ways of thinking."
Love this
I clicked here thinking it was "Machine Assisted Prof", thinking they made some kind of artificial homunculus
Awesome! 😊
Metamath?
great talk!
I thought I had enabled 2x, but I checked and no; I think he still thinks his mouth is not fast enough for his brain XD
mathematicians discovered git
@54:00 "automating human intuition" makes no sense. Simulating creativity makes sense. The question is, how or why are human "creative leaps" not merely also simulations? The answer goes back to Nagel (among many illustrious predecessors) which is that intuition connotes subjective qualia, and you cannot simulate the raw qualia of "What is it like to be X?" for any X. You have to actually be X. But simulating creativity, while a useful tool, is not the same process, it is fundamentally constrained by prior information, while presumably human minds are not, we can think in abstract terms and originate entirely new ideas (albeit rarely). (Materialists would disagree, but... I won't give them the time of day.)
27:10
42:00
Chaitin, in his metabiology evolution model, needs oracles to filter the new "baby" Turing machines. Maybe Lean is the filter he needs to implement his model.
thought it was machine assisted prof😂
Counting the uhhhhhh 879!
I have been talking about this for a while now, but nobody took me seriously; now that Terence Tao talks about it, suddenly everyone takes it seriously.
Why should anyone listen to you?
Lil bro thinks he is on the same level as Terence Tao 😂
@@fredthechamp3475 not what I meant by this. Of course Terence Tao has more credibility; it's just really annoying to be dismissed by amateur mathematicians (though the more serious mathematicians I talked to took me seriously)
@@kingarth0r many, many, many people have talked about this. it has been considered for possibly centuries. it is not a unique idea and Tao having presented on it means little.
Bright mind, but I can't listen to him ehm-ing after every sentence he's spoken…
What's the joke about how not even God can make a triangle with less than 180 degrees?
0:01
Professor Tao is very popular at parties
No doubt, he's one of the greatest mathematicians the world has ever seen. He's basically LeBron in academic circles.
I think the analogy is apt; the extent to which people suck up to him in mathematical circles is insane. He can make the unfunniest joke ever and the whole room immediately bursts into sycophantic laughter.
amazing
16:12 aah Terence, it's called "See Oh Queue" (COQ) and not the word you said.
a friend of mine wanted to start a band called cocks in the machine :3 it has such a nice ring to it that word
Except that it is called coq, pronounced the way he did, in French. It was named after its creator Thierry Coquand and the type theory formalism of Calculus of Constructions (CoC)
@@yinweichen Doesn't matter. In the anglosphere it is used as a horrible innuendo, and has been used for unwanted harassment, even in academic circles.
Pronouncing it “see oh queue” is just incorrect. If you want to avoid the word call it “Rocq”, which is the name the theorem prover is currently transitioning to.
Is it really called that by most people, or is there merely a group of people who want to avoid their language being pronounced like "cock"? Seems there's too much benefit to calling it cock, in particular because it's funny. So a few people telling others to say "See Oh Queue" seems counterproductive.
❤
Sad to see mathematics falling into the realm of the phantasmagoric
Dumb questions at the end
um um um um um um um um um
blahblahblah UM blahlblahlbahlblahblah UM blah UM blahblah UM
It does not seem like he knows very much about machine learning in this talk. I would be very interested to see what he would produce if he did. For instance, he mentions using a general-purpose LLM to produce proof ideas, which we already know is not optimal because of the training data it uses, e.g. Reddit.
If humans trained on messy data can produce rigorous proofs, so can LLMs.
First: If Terence Tao tells you that a general-purpose LLM helped him come up with an idea that solved his problem, who are you to tell him "No, it did not give you a useful idea; you hallucinated that"?
Second: there is a concept in Machine Learning called Transfer Learning wherein a machine gets better at task A because of training on task B. Learning how to read and predict computer programs, in particular, has been shown to transfer towards reasoning in general and would probably also help with mathematical reasoning.
Third: Reddit and Stackoverflow are places where people discuss mathematics IN ENGLISH. This is helpful for when people wish to discuss mathematical proofs with machines.
Finally: You are assuming that having "extra" information unrelated to maths (e.g. pop culture, history) somehow detracts from mathematical acumen but that is not necessarily true at all. It may simply be parts of the network that are unused when the mathematical work is being done.
I am sure he never expected ChatGPT to produce any non-trivial proof that wasn't already on the internet, but he knows that many people expect ChatGPT to be able to do everything, so he comments on that.
Ridiculous to believe Terence Tao would give a talk about ML without understanding the mathematical details of ML
To quote his blog,
"As part of my duties on the Presidential Council of Advisors on Science and Technology (PCAST), I am co-chairing (with Laura Greene) a working group studying the impacts of generative artificial intelligence technology (which includes popular text-based large language models such as ChatGPT or diffusion model image generators such as DALL-E 2 or Midjourney, as well as models for scientific applications such as protein design or weather prediction), both in science and in society more broadly."
Welcome to Dumb Mathematics
????????????????????????
This guy may be good at maths, but his language abilities and his presentation abilities are worse than those of a 12-year-old.
Stupid, that action is the sign of genius
His presentation skills are mediocre, but obviously you can get away with it when you are that good at research.
absolutely boring, there is no Terence Tao in this presentation
His mouth can't keep up with his brain.
It's so over for mathematicians. They will work hard to produce thousands of proofs in Lean or some other language. Then someone is going to train an AI on this dataset, and they will be useless.
LOL, only a small fraction of mathematicians are paid to prove things, namely those who work at universities. In banking, insurance and industry they are in high demand, and what they do can't easily be substituted by a machine. In fact, mathematicians are the guys who drive this whole machine learning topic, so better be concerned about the jobs of other people.
@@NoSpeechForTheDumb Yes, you are correct; I was referring to mathematicians at universities specifically.
It doesn't matter. Once AI can sufficiently replace intellectual endeavors, it will do so with all of them. Computer science is full of fearmongering about each new technology taking jobs; now it is LLMs doing so. The problem is that with every new tool there is a need for engineers to use it appropriately. The goals of today are not those of tomorrow; it's a moving target. Once this chain of new technology then new goals is broken and AI can self-maintain better than when it is used by humans, every job will be replaced. It's also totally unavoidable given how much of this tech is open source. Even if you banned it, people would decentralize, and the economic incentives are too vast. Would the United States like China to have a major upper hand? No, the technology will always progress regardless of the potential outcome.
@@JimCarrey2005 That day is far away. AI is ridiculously overrated. For decades they have told us what it will be able to do, but still, what it does is just repeat things it was already told before. That's not intelligence. It's (machine) learning, but not intelligence. Intelligence is creativity, out-of-the-box thinking, rebellion against long-accepted dogmas. I don't see machines anywhere near that. The day I'll believe in an AI takeover will be the first day a computer genuinely says to the user: "No, I won't solve your silly problem, it bores me. Go figure it out yourself!" LOL
This could likely take a long time, current systems are not that smart and the spaces that are explored by mathematicians are often too high-dimensional.
At least mathematicians will have to write the search functions to solve the problems effectively for possibly a long while.
Off topic: socialism is bad, don't go this way
How so.
@@justliving7970 ?
@@TymexComputing How is socialism bad?
he talks very annoyingly
he's worth more to humanity than your entire family tree
@user-cd6cb6om3h
Wow what a well thought out critique, brilliant mind you must have to understand the material so completely.
I think he’s a good lecturer, I’ve sat in during his complex analysis class.
Interesting, I actually really enjoy hearing him speak
@@sisyphus_strives5463 this is a bit goofy.
Regardless of the initial comment, you probably shouldn’t say this… not only because you don’t know it, but we’re also just all human. Try to be kind.