Hey cool, my tactic (itauto) got featured on computerphile! I usually read it as "i-tauto" rather than "i-t-auto", it stands for "intuitionistic tautology". There is also a "tauto" tactic which does classical tautologies but itauto produces nicer looking proofs which probably made it a better candidate for this demonstration.
Yep. For anyone wondering what intuitionistic logic has to do with nicer looking proofs: Lean is based on lambda calculus, which you can see in those printouts at the end, and the "natural" proof language for lambda calculus is intuitionistic logic (this is usually known as the Curry-Howard correspondence). For the law of excluded middle you just have to add it as an axiom and apply it like a theorem (and you can, funnily enough, use itauto to do that, leading to "itauto!" which is "itauto but try harder" i.e. not really intuitionistic). By contrast, "tauto" is more like a classical SAT solver, it turns everything into CNF eagerly and this generates a horrible looking proof term.
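For the curious, here is a small sketch of what "adding excluded middle as an axiom and applying it like a theorem" looks like in practice (Lean 3 syntax; `classical.em` is the axiom-backed lemma):

```lean
-- Excluded middle is not provable intuitionistically,
-- but `classical.em` provides it, backed by the choice axiom:
example (P : Prop) : P ∨ ¬P := classical.em P

-- Once you have it, you can use it like any other theorem,
-- e.g. to derive double-negation elimination:
example (P : Prop) (h : ¬¬P) : P :=
or.elim (classical.em P) (λ hp, hp) (λ hnp, absurd hnp h)
```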
Very interesting video. But Sean, I think you should interrupt your guest more often, especially during denser explanations like the one that starts at 10:29. I feel that if this were on Numberphile, Brady would've interrupted prof Altenkirch several times, making him explain in simple, ELI5-like terms what the "assume", "apply" and "exact" keywords actually _mean_.
My guesses: "assume" assumes that the current remaining left-hand side is true, so you only focus on the right-hand side. with "apply" you express the remaining right-hand sides with previous assumptions. "exact" I don't exactly know
assume - Like a formula that has a "goal", or desired output. (A question)
apply - Like putting arguments into the formula, but those arguments have to be compatible. (things that are true, that are supposed to help answer the question)
exact - Returns the formula that satisfies the goal, based upon the given arguments. (formula that is true)
I'm actually being taught by this professor at the University of Nottingham currently, for ACE (Algorithms, Correctness and Efficiency). The module touches upon Lean. So here's what the keywords do:

assume - assume that a premise is true. E.g. for P → Q, we assume P, so now our goal is to prove Q.

apply - apply an assumption. E.g. take (P → Q) → (Q → R) → P → R. Firstly, we use assume on the implications given to us: assume pq, assume qr, assume p. Now we are left to prove R, so finally we use apply! We write "apply qr," because by our assumption qr, R holds if Q holds. Now we need to prove Q, so of course we write "apply pq," because when P holds, Q holds. Now we have to prove P, which brings us to the next keyword.

exact - used when you have a premise that is "exact"-ly the same as your goal. So we write "exact p".

Here is the full code:

theorem C : (P → Q) → (Q → R) → P → R :=
begin
  assume pq, -- assume the left side, P → Q
  assume qr, -- assume the new left side, Q → R
  assume p,  -- assume the leftover variable from the last assumption, P
  apply qr,  -- apply qr because R holds if Q holds (if we are at the zoo we are happy)
  apply pq,  -- apply pq because if P holds, Q holds (if it is sunny, we go to the zoo)
  exact p,   -- P holds in this context, which matches what is left of our theorem
end
I know you are joking, but: you can only assume what is on the left side of the implication arrows in the statement you want to prove. In this example there are three arrows, so he assumes three things. Also, the order in which you make the assumptions is important, since the inner assumptions have to be embedded in the outer ones, and you cannot apply statements which are out of scope.
@@douro20 Also, Schwarzenegger sexually assaulted a lot of women (he admitted to inappropriate behavior in 2018, after being accused back in 2003 by numerous women). I have no reason to think this guy is the same.
this is proof that it helps having a script. all those excursions about who held a talk when and where and how interesting it was: nice. does not help if you do not break down the basics first.
Hopefully someone can comment with a video giving the joke in detail.

Claim: All horses are the same colour. Actually we will show something stronger - every (finite) *collection* of horses are all the same colour. We argue by induction on the size of the collection.

Our base case is when there is one horse in our collection. Since there is just one horse, clearly it's the same colour as every other horse in the set.

Now we prove the induction step: that if every collection of n-1 horses all have the same colour, then every collection of n horses all have the same colour. So, take any n horses. Pick your favorite from the horses, say A. If we remove A from the set of horses, we have n-1 horses left. By the induction hypothesis, all are the same colour. Now pick another horse, B (not A). Put A back in the collection, and take B out. We again have n-1 horses left, and so again they are all the same colour. So A is the same colour as all the rest of the horses, and B is the same colour as all the rest of the horses. So all n of the horses are the same colour, and we've finished our inductive proof.

There are only finitely many horses that have ever lived, so that's a finite collection. Hence all horses have the same colour. Challenge: find the mistake.
Here's the argument made: Consider the statement "forall n, a group of n horses has the same color". We'll prove this by (strong) induction on n. In the case n = 1, our group of horses only has one horse, so clearly the whole group has the same color. Now assume the statement "for any group of n horses, the whole group has the same color" and take a group of n+1 horses. Remove one horse: the remaining n horses all share a color. Remove a different horse instead: those n horses also all share a color. Since the two subgroups overlap, the whole group of n+1 horses has the same color.
@@ArtArtisian But by choosing any A ("your favourite horse") in the set S (S has n horses, where n>1) we are assuming that A is arbitrary, that is that the induction step would work for any horse chosen: A is just a label for that choice. The induction step that says the other n-1 horses are all the same colour is ok, but of course A itself might be a different colour! When we later pick B!=A, we are violating the assumption that the choice is arbitrary, because the second choice B has some assumptions added to it other than the fact that it lives in S.
@@gregoryfenn1462 No, that's not a problem. There is no reason why the choice B has to be arbitrary. A is arbitrary and B is arbitrary but not A which is perfectly valid and doesn't violate any "assumption that A is arbitrary". In fact, there isn't even a reason why A has to be arbitrary. You could as well say it has to be the youngest or biggest horse and B is the opposite. That part of the proof would work out all the same. The part where it falls down is that you won't have any other horses left to form the connection (i.e. be in both sets) if you only have 2 horses, so the induction step doesn't work only specifically in that case. But ofc that alone is enough to break the whole proof.
I would argue that Professor Brailsford is better, but maybe it's just his very relaxing voice and intimate knowledge of early computing. Altenkirch is extremely knowledgeable as well, but his voice is just not as nice to listen to. Sorry professor!
Logic is a very complex topic, for something that is so simple. And it is complex because it is so easy to make a minor mistake in the chain, and that invalidates the entire chain.
I wonder if Large Language Models coming from machine learning will help with mathematics research. I could see a mechanism where an LLM suggests a proof and Lean checks it. Someone must already be thinking of this.. right?
See OpenAI's blog post "Solving (Some) Formal Math Olympiad Problems", where they do something like this to write Lean proofs for mathematical olympiad problems.
You'd somehow think some of the utility of the program is rather niche, but I've certainly used it to simplify quite complex systems of equations. That comes up a weird number of times: when you're solving various things, you need to create an easily solvable system of equations that doesn't use any previous variable, so that you can easily run it on a computer.
Ultra high quality proofs, I expected nothing less from a German Professor of Computer Science. Also, this whole video was an 'r/whoosh' moment for me. 🤯
Does anyone who doesn't already know this stuff follow any of this at all? I mean, I follow the logic of his proof, but the Lean language is totally opaque to me. I have no idea what any of it is doing. Why do the "assume" statements grab the parts of the theorem that they do? (In particular, why does the third "assume" grab "P" and not "¬P"?) What does "apply" and "exact" mean/do?
Well, it's ¬P that you want to prove, so you can't just assume it. In a goal (i.e. "statement that you want to prove") of the form `P → Q`, you can use `assume` to give the assumption `P` a name, so that you can refer to it in your proof. Since you've given `P` a name, this turns your goal into `Q`. E.g. here, `assume h` assigns the name `h` to the assumption `P`. In Lean, `¬P` is defined as `P → False`. You can convince yourself that these are equivalent with a truth table if you'd like. Since `¬P` is just `P → False`, `assume` will assign a name to the assumption `P` that is hidden in `¬P = P → False`. If your goal is `Q`, then you can use `apply` with an assumption `P → Q` (that you reference by the name you gave it) to transform the goal into `P`. E.g. `apply h` will turn your goal into `P` if `h` is the name of an assumption `P → Q`. In natural language, you can read this as "In order to prove `Q`, it is sufficient to prove `P`, because we know that `P → Q`". `exact` is even simpler: If your goal is `P`, then an assumption `P` will prove it. So you just give `exact` the name of the assumption `P` and it will conclude the proof, e.g. `exact h` will conclude the proof for a goal `P` if `h` is the name of an assumption `P`.
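Putting the three tactics together on the example from the video (a sketch in Lean 3 syntax; where the video writes `assume`, plain Lean 3 uses `intro`):

```lean
example (P Q : Prop) : (P → Q) → ¬Q → ¬P :=
begin
  intro pq, -- name the assumption P → Q; goal becomes ¬Q → ¬P
  intro nq, -- name the assumption ¬Q; goal becomes ¬P, i.e. P → false
  intro p,  -- name the P hidden inside ¬P; goal becomes `false`
  apply nq, -- ¬Q is Q → false, so it suffices to prove Q
  apply pq, -- P → Q, so it suffices to prove P
  exact p,  -- we have exactly P as an assumption
end
```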
@@sqrooty I appreciate your response, but I still have questions. Why does the first "assume" grab a whole assertion (P→Q) and not just P? Later on, it grabs just a single premise, even though it's the beginning of an assertion. Why can't I assume ¬Q? Apparently I have to "apply" it before it actually *does* anything. The "assume" statement doesn't really seem to do anything other than assign variable names. And why do I need those names anyway? There are already abstract names for the premises and assertions. None of this makes sense to me. I have to assume three things, but then I have to "apply" two of them, and then "exact" the third. But I didn't really make any decisions in there. I assigned names to three things, that the computer told me I needed to assign names to, and then applied the assumptions, but one of the applications was "exact". Were the others inexact? I should probably go read a real tutorial. I'm glad Prof. Altenkirch wasn't my professor at school. I prefer it when my teachers actually teach things. "1+2=3, yeah? Well, you assign 'a' to 1 and 'b' to 2, and 'c' to 3, and then you assume 'a' and 'b' and apply 'plus', and then you 'exact' c! It's totally clear!"
@@wbfaulk > Why does the first "assume" grab a whole assertion (P → Q) and not just P? Because `(P → Q)` is the assumption of `(P → Q) → R`, where R is some arbitrary proposition. You don't know that P is true, you just know that P implies Q. > Why can't I assume ¬Q? This is exactly what `assume nq` does in that proof. > And why do I need those names anyway? One answer: Because referring to the full propositions by name is more comfortable. Why do you write "By lemma N" in a mathematical proof, and don't write out the full lemma statement every time? There are other sensible answers to this question, and there are ways to write proofs without assigning a name to every assumption. But this is out of scope for the basic proof shown in the video. If you'd like a technical answer: Lean implements a form of natural deduction, where each proposition has associated "introduction" rules (rules that let you construct a proposition) and "elimination" rules (rules that let you do something with a proposition). `assume` is just the introduction rule of implication. > Were the others inexact? It's called `exact` because you give it *exactly* the assumption that proves the statement. `apply` on the other hand takes an assumption `P → Q` and turns a goal `Q` into a goal `P`. > I should probably go read a real tutorial You might benefit from reading an introduction to mathematical proof first. Arguments of the form "Let ε > 0, let x ∈ ℝ, so ...." are very common in mathematics as well, except that here we effectively write `assume` (using "let") without a name. Something of this nature works in Lean as well, by the way - You don't have to give every assumption a name, but you still need to write `assume` ("let") at the start. > I prefer it when my teachers actually teach things. I wouldn't speak so harshly about the exposition in the video. Perhaps Altenkirch is just targeting folks with a different mathematical maturity from yours. 
I agree that this introduction isn't amazing, though.
@@sqrooty Coming from a math and CS background, his introduction was just fine tbh, it's a bit sad that he gets so much hate. Perhaps Computerphile should add a note about the target audience, although I understand this may not be ideal.
I have a question: The method of proof presented here seems quite counter-intuitive to humans. The "human-style" proof would be to assume (P->Q), assume (not Q) and assume P, then use the Modus Ponens logical rule on (P->Q) and P to deduce Q, and then find that both Q and not Q hold, a contradiction. Instead in this video prof. Altenkirch presents us with the fact that (not Q) is equivalent to (Q -> False) and uses "apply Q", which in my opinion is very counter-intuitive and not very helpful for understanding. Do you always have to think this "reverse" way when using lean?
The issue here is you can't directly assume P, because you're not given P as a hypothesis: you're only given "P -> Q" and "not Q". What you really want to do is say "if I *were* given P as a hypothesis, that would lead to a contradiction"; in other words, you want to prove "P -> False". The way to do this is straightforward: Given a proof of P, use the "P -> Q" hypothesis to get a proof of Q, and then the "not Q", or "Q -> False" hypothesis to get a contradiction (i.e. "a proof of False"). But this is exactly what "apply Q" does. As for reasoning in the reverse, that's generally a feature of the tactic language you find in proof assistants. It allows you to focus on the goal ("not P") and apply various reasoning steps to get the goal closer and closer to a hypothesis until you're done. Lean does allow you to reason "forwards" by directly constructing the proof term you want: in this case it would be "exact nq (pq p)".
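For comparison, the "forwards" term-mode version mentioned at the end looks like this (a sketch in Lean 3 syntax):

```lean
-- λ p, ... introduces the assumed proof of P; then we reason forwards:
-- pq p is a proof of Q, and nq (pq p) is a proof of false,
-- which is exactly a proof of ¬P.
example (P Q : Prop) (pq : P → Q) (nq : ¬Q) : ¬P :=
λ p, nq (pq p)
```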
@@rhaegav-targaryen Thanks for the detailed answer, I understand it much better now. But still, I would argue that the reasoning presented here in the Lean program is "backwards" and non-intuitive. For example, in terms of the example given at 7:55, the way the Lean program is written translates to:
"I know the rule that whenever the sun shines we go to the zoo" (assume P → Q)
"Assume we don't go to the zoo; I have to prove the sun does not shine" (assume ¬Q)
"Assume the sun shines; I have to prove a contradiction" (assume P)
"For a contradiction, it would be enough to prove that we go to the zoo" (apply ¬Q)
"For a contradiction, it would be enough to show that the sun shines" (apply P → Q)
"But we know that the sun shines, so we're done" (exact P)
I would argue that the first three lines are intuitive, but the last three lines are exactly in the wrong order to be easily understood.
@@XBrainstoneX Yup! You're right, the way the Lean program is written is considered "backwards" reasoning. Whether its more or less intuitive than its forwards counterpart is probably subjective; I think I agree with you that for simple examples like this, forwards reasoning feels more natural. But with more complicated proofs it becomes necessary to use the tactic language rather than directly constructing a proof term (e.g. with "exact"), and the tactic language naturally manipulates the goal (i.e., reasoning backwards trying to get the goal to match one of your hypotheses) rather than manipulating the hypotheses (reasoning forwards, trying to get a hypothesis that matches the goal), and that's probably because while there may be many hypotheses, there's usually only one goal in focus at a time. Its maybe worth mentioning, though, that Lean _does_ let you directly manipulate the hypotheses as well, with the "at" keyword (for example, saying "rw X at H" where X has type 'a = b', and H is a hypothesis containing the term 'a', and 'rw' stands for 'rewrite')
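A small sketch of the `at` keyword in action (Lean 3 syntax; the names here are made up for illustration):

```lean
-- `rw X at H` rewrites inside the hypothesis H rather than the goal:
example (a b : ℕ) (X : a = b) (H : a + 1 = 2) : b + 1 = 2 :=
begin
  rw X at H, -- H : a + 1 = 2 becomes H : b + 1 = 2
  exact H,
end
```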
Perhaps the way to look at it, is that automated proofs are still in the early stages of development. Like assembly once was the state of the art in computer programming. Simple calculations, obvious to us, took pages and pages of code. Of course, 'counterintuitive' is not about the detailedness of the process, but just to highlight that this is an early development. Perhaps in the future, automated proofs will be presented differently, and more in line with our intuition.
12:17: Why is it an equals symbol here? What's the difference between "non(P) = (P => False)" and "non(P) => (P => False)"? Maybe by "=" you mean "⇔", no? (What is the difference between "⇔" and "=" btw? I would say "=" compares values and "⇔" compares propositions, but I'm not sure.)
Yeah but that only works for discrete horses; as soon as any one horse loses a limb, the whole team of horses will fall apart (a terrible chore to clean up after).
So you can't use meaningful variable names in this language? It looks like it's making special interpretations of "n" as a prefix and the fact that "p" and "q" are consecutive. ... I have to say at the end that the explanation here was rather poor. I worked on a very simplistic theorem prover (convert axioms to DNF, invert theorem, apply modus ponens till contradiction or no more unifiable statements) in college because a textbook discussed them and they looked interesting, and even given that experience (surely more than the expected target audience) I feel like I have no idea what any of that syntax means or is trying to do, or therefore what the video is trying to get across.
@@wbfaulk The interpreter knew that the thing to prove was not q implies not p. When the user does an assume, it assigns the hypothesis, not q, to the variable the user entered, and what remains to be proved is the conclusion, not p. Whatever name the user had entered, it would still have been assigned not q. nq is just a memorable name, but it could be called anything.
@@redjr242 He has three "assume" statements, without an explanation of what they are doing. I don't doubt that Lean is doing the right thing. It's just that this video doesn't explain it at all. I'm glad I'm not relying on Professor Altenkirch (at least as edited here) teaching me anything I need to know.
The use of symbols like that reminds me a bit of APL which is a rather difficult language to learn due to its use of non-intuitive symbols for most of its basic functions.
In general, practically all the notation used in Lean is standard mathematical notation. There's no funky symbols that are specific to Lean like there are in APL, for example.
@@sqrooty I think in the category theory library they use like, a longer arrow symbol in order to give the type of morphisms between two objects, because they can’t use the same Unicode character as they use for the type of functions between two types ? Though I last looked at that a couple years ago, so I could be wrong, or it could have changed (I know that the category theory library had a substantial change after I started my incomplete effort to use it. In retrospect it may have been a bad choice of a first thing to try to use in Lean )
@@drdca8263 Yeah, the category theory library was certainly one of the harder parts of mathlib to use. And you're right, there's still the occasional funky symbol here and there, but I'd hope that this doesn't happen at the level of this video yet :)
7:35: Why are they not necessary? Is it associativity? And if yes, why does the expression ((p => q) => non(q)) => non(p) equal the expression (p => q) => (non(q) => non(p))?
> why the expression ((p => q) => non(q) ) => non(p) equal the expression (p => q) => (non(q) => non(p)) It's not. Implication isn't associative. But the brackets are not necessary, because in Lean, the implication operator `→` is by default what we call "right associative", i.e. `a → b → c` will parse as `a → (b → c)`. This is just for convenience and in line with the notation of functional programming languages (if `→` refers to the function arrow, not implication).
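To see the right-associativity concretely, the unbracketed statement behaves exactly like the explicitly bracketed one (a sketch in Lean 3 syntax):

```lean
-- `a → b → c` means `a → (b → c)`: a proof of it can be applied
-- first to a proof of `a`, then to a proof of `b`:
example (a b c : Prop) (h : a → b → c) (ha : a) (hb : b) : c := h ha hb

-- The other bracketing, `(a → b) → c`, is a different statement:
-- its proof expects a *function* from a to b, not a proof of a.
example (a b c : Prop) (h : (a → b) → c) (f : a → b) : c := h f
```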
without even watching this I can deduce that no one has yet figured out how to make a mathematical proof AI . If and when we do we will be instantly left in the dust unable to understand the proofs.
imagine if some math professors spend their entire lives proving one theorem... and this guy proves 65,000 theorems with the computer in like 1 year LOL
It would be interesting to understand some use cases of when to use something like Lean and when to use something like Agda - is it that Lean is more proof assistant with some functional programming and Agda is more programming language with some proof assistant or…? Time to do some digging.
Lean 4 is a fully-fledged programming language and self-hosting (i.e. it is implemented in itself). The differences between Lean and Agda are mostly in terms of the type theory they use, what convenience features they provide and the way that each community writes proofs (you'll know what I mean if you look at proof scripts in both).
I remember filling multiple whiteboards' worth during my uni days many years ago solving these formal proofs. Would have killed to have had this IDE plugin!
Yes. It makes some choices which are more convenient for formalizing math. Iirc it has quotient types built in to the type theory, whereas Coq I think needs to use setoids instead?
@@drdca8263 I think the Coq people take offense with this, because you can simulate quotients using another Coq feature :) In terms of the raw type theory, Coq and Lean are very similar. It's mostly the elaboration layer (i.e. the layer that converts user input into type-checkable terms) where they differ. Unfortunately, I don't know enough Coq to give a fair account.
I *really* don't like tactics. Lambdas are way more familiar to me as a functional programmer (and as a mathematician), especially because it's quite hard to follow the types of expressions when using tactics. Here's my solution to the challenge in Swift:
typealias Not<A> = (A) -> Never
func challenge<T>(_ fun1: @escaping (T) -> Not<T>, _ fun2: @escaping (@escaping Not<T>) -> T) -> Never {
    let tFromNothing = fun2 { assumeT in
        let notT = fun1(assumeT)
        return notT(assumeT)
    }
    return fun1(tFromNothing)(tFromNothing)
}
I only wish Swift had a type system suitable for more complex proofs, because I much prefer this syntax.
I've never seen Lean before, so my question is: I don't understand the syntax of the proof, what do the expressions pq, nq and p mean? Are they a different notation for P -> Q, !Q and P? (Sorry, I don't know how to type the mathematical logic symbols) If that is so, then why use two different notations? What happens if you name a variable N, does nq then mean !Q or N -> Q? Or are the expressions just place-holders and are by some rule related to parts of the example?
I have not used Lean and I don't really know much about logic. But it looks like they are just variable names for the next part of the input. Basically "placeholder" names.
If the language just lets you assume things, what prevents you from assuming an illogical thing? (I tried to test this on my own, but apparently you can't just open a file and enter code. You have to first create a directory to contain your file with a utility called "leanproject", but the binary download doesn't have a command named "leanproject", so I then tried to install it via the VSCode extension, which pretty much just downloaded the same thing and still didn't include "leanproject", and after several minutes of trying to figure out where this mystery command came from gave up and deleted everything.)
That's the cool thing. You can assume anything you want in the declaration of a theorem! An implication is basically a function. Just as you can define a function to have any input, you can define an implication to have any hypothesis, even false! You won't be able to prove 2+2=5 on its own, but you can prove a theorem that assumes 2+2=5 and concludes something else. In fact, a theorem that assumes false can conclude any proposition. It's called the principle of explosion.
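The principle of explosion can be sketched in Lean 3 syntax like this:

```lean
-- From `false`, anything follows (`false.elim`):
example (P : Prop) (h : false) : P := false.elim h

-- So a theorem that assumes something absurd is still provable;
-- its conclusion only has to hold *under* that assumption:
example (h : false) : 2 + 2 = 5 := false.elim h
```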
The conclusions you come to using an assumption are only valid (or true) if the assumptions are valid (or true). It's really no different than real life: if you assume vaccinations cause Bad Thing™, then we shouldn't be vaccinating... but vaccinations don't cause Bad Thing™, so that conclusion is useless.
Right, and my point is if this is a tool to hold a proof to rigorous standards, in hopes of finding an invalid conclusion hidden somewhere in the middle, what's to prevent that invalid conclusion from being hidden in the things that you've told it to assume?
@@wbfaulk Ah, ok, so it sounds like you're asking how we can be sure we didn't make a faulty assumption in the global scope, i.e. a whole axiom, rather than in a hypothesis of a theorem, whose conclusions would just be limited to the scope of the theorem, while the entire implication remains true even in the global scope.

I've dabbled a bit in Agda, and not at all in Lean, but I assume that Lean probably has a similar mechanism. In Agda, you can postulate axioms, which are theorems that don't require a proof and that the checker assumes to be true. Indeed, if you postulate something like 2+2=5, you will be able to prove a false theorem (in the global scope). The way to avoid this situation is to disallow the use of postulates. I think you can still prove all of constructive mathematics without using any postulates.

Secondly, you can define new datatypes in Agda and prove theorems about them. For example, you can define the natural numbers, integers, rational numbers, real numbers and beyond using just inductive datatypes and the rules of logical inference. One way this could go wrong is if you define a datatype incorrectly, then proceed to prove theorems that are true of what you think the datatype is, but really about the datatype you actually defined. So one place to be careful is that the datatypes you define really are what you mean them to be.

If both of these guidelines are followed, I think it's impossible to prove a false theorem, as far as we know. Technically, Gödel's incompleteness theorems say we can't prove from within that math is internally consistent, but it seems to have held up well so far. Hopefully I understood your question. If so, does this answer make sense?
I've been using Idris, in which the first tautology would be
implicationIsAntiComm : (p -> q) -> Not q -> Not p -- Not p = p -> Void
implicationIsAntiComm im nq p = nq (im p)
You can write proofs in the same style in Lean (which is also dependently typed) as well, by the way: `example : (p → q) → ¬q → ¬p := λ h hnq hp => hnq (h hp)` (this is Lean 4, but 3 is similar)
An axiom is something that you assume is true without providing any proof; it is a building block of all further reasoning. A tautology is a statement which is true regardless of the input.
Inductive proofs are proofs by contradiction in disguise. Say we want to prove statement A. You make an inductive argument where you show that if the case _n_ is true, then _n+1_ is true. Now suppose for the sake of contradiction that there is a smallest counterexample to statement A. This means that some case _m_ is false. But _m_ being the smallest means _m-1_ is true, and by your inductive argument this means _m_ is true, which is a contradiction. Hence, there is no smallest counterexample and thus there is *no* counterexample.
I find this format impossible to follow along with, lessons shouldn't be improvised like this and then cobbled together, this is probably a 30 second gif if done properly.
I understand proofs as they're traditionally taught in school, but I couldn't follow what this guy was showing. The start seemed straightforward, but I quickly realized I had no idea what he was doing. Each sentence made sense, but I didn't get how this kind of proof works.
It is constructive logic using type theory. The very important notion here is that types are propositions and values are proofs. A function is itself a value, so a function of type T is a proof of T.

The notation in general is (X : T) to say "X is of type T", or "X is a proof of T", and (A : T) -> B to say a proof A of type T yields a proof of B. So a function of type P -> Q is a proof that if P then Q. And not Q is the same as Q -> False, meaning "if Q then False", where False is an absurdity (so you refute Q).

So how do you use those functions? You assume the parameters exist: you assume propositions P and Q exist, then assume P -> Q, then assume Q -> False. You essentially want to build a function P -> False: if all those assumptions hold and you also have P, then you can show a contradiction. To do that, you assume a P exists, and your goal is to produce a False, to show that this can not happen. Because you assumed P, you can call the function P -> Q, then the function Q -> False, and you end up with a False in hand.

I personally like to look at the proof statement itself, which would be something like: forall P Q : Prop, (P -> Q) -> (Q -> False) -> P -> False. This reads: for all propositions P and Q, if a function from P to Q and a function from Q to False exist, then there is a function from P to False. But a function from X to False can be read as "not X". Translating: for all propositions P and Q, if a function from P to Q and (not Q) exist, then that gives (not P).
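That "function reading" can be written out directly as a term (a sketch in Lean 3 syntax):

```lean
-- For all propositions P and Q: a function P → Q together with a
-- function Q → false yields a function P → false, i.e. ¬P.
example : ∀ P Q : Prop, (P → Q) → (Q → false) → (P → false) :=
λ P Q pq nq p, nq (pq p)
```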
Very bad explanation. It is not clear until the end what he means when he writes "pq", because the next line "nq" might mean "Q = False" or "Not Q". So what does "pq" mean? "Define variables P and Q", or "Q = True", or what???
He says "The left-hand side which I call pq" as he writes pq. The left-hand side of the example (P → Q) → (¬Q → ¬P) is the part before the central arrow: (P → Q). Similar to how in 1+2=3 the 1+2 is referred to as the left-hand side. Hope this helps :)
pq doesn't mean anything. It's just a name he gives to the newly created proposition. And since that proposition is P -> Q, he named it pq. But agreed that it's not super clear.
Yes, he didn't explain that part, which makes it confusing to understand. Actually, what happens has to do with the keyword "assume" in the programming language. Whenever you have something left to prove which is of the form A->B, then you can write in your program the line "assume x", where x is any string of letters. After this, the logical formula A will have the name "x". This transforms the statement, and now you have to prove the logical statement B. The reason behind this is that proving the mathematical statement "A->B" is equivalent to showing that whenever A is assumed, then B must automatically be true. Hope this helps.
We don't go to the zoo even though the sun is shining... because the car broke. So, no check if you missed a variable... (But it sounds nice to free up RAM with software if you write software ;-) Greetings from Germany and have a nice day
I don't get it. If a proof can be wrong, then how is that better than a bug in a program? If it can be wrong it proves nothing, like the "proof" that all horses are the same colour - then it's not a proof!? 😥
@@f16madlion Right - there's a technical term for proofs that are not wrong. They are called 'valid proofs'. The horse proof is not valid; there is a mistake (can you find it?). Lean, when working correctly, checks whether a given proof is valid. There are (and will be) bugs in Lean or any other proof checker, just as there are and will be mistakes in human-made proofs. But the point is that these mistakes can be found and corrected. A proof that many people have checked, or that is verified by extremely well tested and managed code, is very likely to be valid. What's more, the way to check if a proof is valid is, ultimately, human-comprehensible. The rules are simple if you get low-level enough. If you ever really care about something, you can check.
He is talking about the importance of making sure we only accept alleged-proofs that actually are proofs, and pointing out the danger of informally stated alleged proofs in natural language. The danger of, you give me an argument which seems like a proof, but actually it has a mistake that I don’t notice. When we make our arguments formal to the point that we can have a computer check it, then (assuming the proof checker program on the computer is correct) then we can be sure that the argument actually is a proof.
Are you asking A) "what is a programming language?" or B) "what is _the_ programming language [which is being used for this program]?" Sorry, I don't mean to be pedantic, but from your question where there are some missing words, it's impossible for us to be sure what you're asking. I'll leave the final answer to your question to someone who knows what you want after you clarify your question, because I'm more a spoken language linguist! I like learning about maths and computing, but I don't know the answer to B, and my answer to A would undoubtedly be answered better by a programmer!
so, why exactly are the people you are training not allowed to use the tools you are using? got a bit lost on that point. Shouldn't you train them to use those?
this guy is so abjectly awful at explaining things. it's incredible. i really feel for the UNott students forced to have him. he has absolutely zero capacity to explain new concepts as they're introduced, or empathy for students
I agree. I don't understand most of what this guy says. The videographer cannot possibly keep up with the explanations either. All the people saying how great he is seem to already understand the concepts and are just patting themselves on the back. Is this channel for novices or not?
8:13, it's not true: if you don't go to the zoo, it could still be sunny. You could skip the zoo while the sun is shining because of an emergency or any other reason. That's because you're talking about a real-life example, not mathematical objects. So it's obviously a tautology in the domain of mathematical objects, but outside it, it depends on the nature of the propositions.
The statement is that "if it is sunny, then I absolutely will go to the zoo, without exception." This meaning is clear just from "if it is sunny, then I will go to the zoo". The fact that one makes exceptions in real life or lies about their intentions doesn't mean the statement's meaning is unclear.
@@yinweichen I was starting to write another counter-argument, BUT then I had the thought: the statement is about the naive intention, not about what really happens.
Hey cool, my tactic (itauto) got featured on Computerphile! I usually read it as "i-tauto" rather than "i-t-auto"; it stands for "intuitionistic tautology". There is also a "tauto" tactic which does classical tautologies, but itauto produces nicer-looking proofs, which probably made it a better candidate for this demonstration.
nice accomplishment ✌
Really cool 😎
@@hobrin4242 goal accomplished 🎉
Ha. As in "intuitionistic logic". That makes more sense. It's funny how easily things can get misread.
Yep. For anyone wondering what intuitionistic logic has to do with nicer looking proofs: Lean is based on lambda calculus, which you can see in those printouts at the end, and the "natural" proof language for lambda calculus is intuitionistic logic (this is usually known as the Curry-Howard correspondence). For the law of excluded middle you just have to add it as an axiom and apply it like a theorem (and you can, funnily enough, use itauto to do that, leading to "itauto!" which is "itauto but try harder" i.e. not really intuitionistic). By contrast, "tauto" is more like a classical SAT solver, it turns everything into CNF eagerly and this generates a horrible looking proof term.
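For instance, assuming mathlib is available (and that I remember the tactic's import path correctly), the video's example can be dispatched in one line:

```lean
import tactic.itauto

-- itauto searches for an intuitionistic proof and
-- produces a readable proof term
example (P Q : Prop) : (P → Q) → ¬Q → ¬P :=
by itauto
```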
I absolutely love listening for Professor Altenkirch's snarky comments. He just drops them in like it's no big deal and keeps going.
Very interesting video. But Sean, I think you should interrupt your guest more often, especially during denser explanations like the one that starts at 10:29. I feel that if this were on Numberphile, Brady would've interrupted Prof. Altenkirch several times, making him explain in simple, ELI5-like terms what the "assume", "apply" and "exact" keywords actually _mean_.
My guesses:
"assume" assumes that the current remaining left-hand side is true, so you only focus on the right-hand side.
with "apply" you express the remaining right-hand sides with previous assumptions.
"exact" I don't exactly know
I felt there were many parts of his explanations which I couldn't follow
@@TheAudioCGMan We shouldn't have to guess..
assume - Like a formula that has a "goal", or desired output. (A question)
apply - Like putting arguments into the formula, but those arguments have to be compatible. (things that are true, that are supposed to help answer the question)
exact - Returns the formula that satisfies the goal, based upon the given arguments. (formula that is true)
I'm actually being taught by this professor at the University of Nottingham currently, for ACE (Algorithms, Correctness and Efficiency). This module touches upon Lean. So here's what the keywords do:
assume - assume that a premise is true
e.g. P → Q
We assume P so now our goal is to prove Q
apply - this means to apply an assumption
e.g. (P → Q) → (Q → R) → P → R
So firstly, we need to use assume to assume the implications given to us so we have
assume pq,
assume qr,
assume p,
Now we are left to prove R
So we use apply finally!
We write:
apply qr,
This is because in our assumption qr, R holds true if Q holds, proving it.
Now we need to prove Q
Well we apply pq of course so we write
apply pq,
This is because when P holds, Q holds, thus proving Q.
now we have to prove p, well now that brings us to our next keyword "exact". Exact is used when you have a premise that is "exact"-ly the same as your goal.
So we exact p
Here is the full code:
theorem C : (P → Q) → (Q → R) → P → R :=
begin
assume pq, -- assume the left side P → Q
assume qr, -- assume the new left side Q → R
assume p, -- assume the leftover variable P
apply qr, -- apply qr because R holds if Q holds (if we are at the zoo we are happy)
apply pq, -- apply pq because if P holds, Q holds (if it is sunny, we go to the zoo)
exact p, -- p holds in this context, which matches our original theorem
end
Lesson learned. To automatically prove something, assume everything, then apply what you just said and you have a proof.
I know you are joking, but: you can only assume what is on the left side of the implication arrows in the statement you want to prove. In this example there are three arrows, so he assumes three things.
Also, the order in which you make the assumptions is important, since the inner assumptions have to be embedded in the outer ones, and you cannot apply statements which are out of scope.
Love it mate
@@koenlefever No, you assume everything (both lhs and rhs) and prove it's true
@@broccoloodle No. You can always prove True regardless of the truth value of lhs and rhs.
I keep coming back to those basics videos because I STILL don't know how to properly use the software. I'm gonna cry
"I should mention lean is free" *scratches nose*
Blink twice if the Microsoft marketing team is holding you hostage
I tried setting up lean in vs code last year and found it quite difficult to get into.
Hopefully accessible tutorials now exist.
Yeah, don't. There's a WASM version of leanprover. Use that. You don't need to set up anything; just point your browser there and it works.
I could listen to him for hours. His German-English accent is very calming. It constantly makes me think of Arnold Schwarzenegger.
Schwarzenegger is Austrian, though. Accent is a bit different.
@@douro20 Also, Schwarzenegger sexually assaulted a lot of women (he admitted to inappropriate behavior in 2018, after being accused back in 2003 by numerous women). I have no reason to think this guy is the same.
I'll be back ... on Numberphile
This is proof that it helps to have a script. All those excursions about who held a talk when and where and how interesting it was: nice, but it does not help if you do not break down the basics first.
I have to say, I can't remember a video from this guy that I enjoyed. He seems to explain only to those who already know it.
Can someone explain the horse induction argument? I don't get it.
Hopefully someone can comment with a video giving the joke in detail.
Claim: All horses are the same colour. Actually we will show something stronger - every (finite) *collection* of horses are all the same colour.
We argue by induction on the size of the collection. Our base case is when there is one horse in our collection. Since there is just one horse, clearly it's the same colour as every other horse in the set.
Now we prove the induction step: that if every collection of n-1 horses all have the same colour, then every collection of n horses all have the same colour.
So, take any n horses. Pick your favorite from the horses, say A. If we remove A from the set of horses, we have n-1 horses left. By the induction hypothesis, all are the same colour.
Now pick another horse, B (not A). Put A back in the collection, and take B out. We again have n-1 horses left, and so again they are all the same colour.
So A is the same colour as all the rest of the horses, and B is the same colour as all the rest of the horses. So all n of the horses are the same colour, and we've finished our inductive proof.
There are only finitely many horses that have ever lived, so that's a finite collection. Hence all horses have the same colour.
Challenge: find the mistake.
@@ArtArtisian I think this doesn't work for n=2, where there are no two overlapping subsets on which we can apply the transitive property of equality.
Here's the argument made: Consider the statement "forall n, a group of n horses has the same color". We'll prove this by (strong) induction on n.
In the case n = 1, our group of horses only has one horse, so clearly the whole group has the same color.
Now assume the statement "for any n
@@ArtArtisian But by choosing any A ("your favourite horse") in the set S (S has n horses, where n>1) we are assuming that A is arbitrary, that is that the induction step would work for any horse chosen: A is just a label for that choice. The induction step that says the other n-1 horses are all the same colour is ok, but of course A itself might be a different colour!
When we later pick B != A, we are violating the assumption that the choice is arbitrary, because the second choice B has some assumptions added to it other than the fact that it lives in S.
@@gregoryfenn1462 No, that's not a problem. There is no reason why the choice B has to be arbitrary. A is arbitrary and B is arbitrary but not A which is perfectly valid and doesn't violate any "assumption that A is arbitrary". In fact, there isn't even a reason why A has to be arbitrary. You could as well say it has to be the youngest or biggest horse and B is the opposite. That part of the proof would work out all the same. The part where it falls down is that you won't have any other horses left to form the connection (i.e. be in both sets) if you only have 2 horses, so the induction step doesn't work only specifically in that case. But ofc that alone is enough to break the whole proof.
Any Prof. of Logic knows that you only need to ask one question: do you own a dog house?
Brilliant, Rip the goat!
More Professor Altenkirch! the best
I would argue that Professor Brailsford is better, but maybe it's just his very relaxing voice and intimate knowledge of early computing. Altenkirch is extremely knowledgeable as well, but his voice is just not as nice to listen to. Sorry professor!
I might be showing my age but is it bad I always think about this prof in the Nakatomi tower with Hans
hmmm i think i have to rewatch this video in the morning with a fresh brain...
Logic is a very complex topic, for something that is so simple. And it is complex because it is so easy to make a minor mistake in the chain, and that invalidates the entire chain.
@@JohannaMueller57 Speaking of a failed logic chain....
I wonder if Large Language Models coming from machine learning will help with mathematics research. I could see a mechanism where an LLM suggests a proof and Lean checks it.
Someone must already be thinking of this.. right?
probably not the most efficient way to do it
@@JasminUwU yea I could see that
see open AI's blogpost Solving (Some) Formal Math Olympiad Problems where they do something like this to write Lean proofs for mathematical olympiad problems.
Admit it you read the video title with a German accent
Haha no, but I went back and did it!
I think it's a Belgian accent
Just took a class in college with Professor Pete Manolios that used ACL2 and ACL2s to nearly automatically prove lemmas about programs
He teaches Lean now, interesting class
You'd somehow think some of the program's utility is rather niche, but I've certainly used it to simplify quite complex systems of equations. That comes up a weird number of times when you're solving various things and need to create an easily solvable system of equations that doesn't use any previous variable, so that you can easily run it on a computer.
Ultra high quality proofs, I expected nothing less from a German Professor of Computer Science.
Also, this whole video was an 'r/whoosh' moment for me. 🤯
This guy and this video belong on a Computerphile 2 channel, targeted at people who don't need things explained to them, aka not novices.
Waiting for some prog synth riffs.
Awesome computer guy! Smart, wild hair, unshaven, Hawaiian shirt. I don't need no stickin' induction. I do my proofs the hard way ... with computers!
Does anyone who doesn't already know this stuff follow any of this at all? I mean, I follow the logic of his proof, but the Lean language is totally opaque to me. I have no idea what any of it is doing. Why do the "assume" statements grab the parts of the theorem that they do? (In particular, why does the third "assume" grab "P" and not "¬P"?) What does "apply" and "exact" mean/do?
Yeah I have no idea what those statements are meant to accomplish. I guess the moral of the story is RTFM.
Well, it's ¬P that you want to prove, so you can't just assume it.
In a goal (i.e. "statement that you want to prove") of the form `P → Q`, you can use `assume` to give the assumption `P` a name, so that you can refer to it in your proof. Since you've given `P` a name, this turns your goal into `Q`. E.g. here, `assume h` assigns the name `h` to the assumption `P`. In Lean, `¬P` is defined as `P → False`. You can convince yourself that these are equivalent with a truth table if you'd like. Since `¬P` is just `P → False`, `assume` will assign a name to the assumption `P` that is hidden in `¬P = P → False`.
If your goal is `Q`, then you can use `apply` with an assumption `P → Q` (that you reference by the name you gave it) to transform the goal into `P`. E.g. `apply h` will turn your goal into `P` if `h` is the name of an assumption `P → Q`. In natural language, you can read this as "In order to prove `Q`, it is sufficient to prove `P`, because we know that `P → Q`".
`exact` is even simpler: If your goal is `P`, then an assumption `P` will prove it. So you just give `exact` the name of the assumption `P` and it will conclude the proof, e.g. `exact h` will conclude the proof for a goal `P` if `h` is the name of an assumption `P`.
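Putting the three tactics together on the video's theorem, with the goal tracked at each step (a sketch in Lean 3; as noted, `assume` is the course's name for what vanilla Lean 3 calls `intro`):

```lean
theorem contrapos (P Q : Prop) : (P → Q) → ¬Q → ¬P :=
begin
  assume pq,  -- pq : P → Q           goal: ¬Q → ¬P
  assume nq,  -- nq : ¬Q              goal: ¬P, i.e. P → false
  assume p,   -- p : P                goal: false
  apply nq,   -- nq : Q → false, so the goal becomes: Q
  apply pq,   -- pq : P → Q, so the goal becomes: P
  exact p,    -- p is exactly a proof of P
end
```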
@@sqrooty I appreciate your response, but I still have questions. Why does the first "assume" grab a whole assertion (P→Q) and not just P? Later on, it grabs just a single premise, even though it's the beginning of an assertion. Why can't I assume ¬Q? Apparently I have to "apply" it before it actually *does* anything. The "assume" statement doesn't really seem to do anything other than assign variable names. And why do I need those names anyway? There are already abstract names for the premises and assertions.
None of this makes sense to me. I have to assume three things, but then I have to "apply" two of them, and then "exact" the third. But I didn't really make any decisions in there. I assigned names to three things, that the computer told me I needed to assign names to, and then applied the assumptions, but one of the applications was "exact". Were the others inexact?
I should probably go read a real tutorial. I'm glad Prof. Altenkirch wasn't my professor at school. I prefer it when my teachers actually teach things.
"1+2=3, yeah? Well, you assign 'a' to 1 and 'b' to 2, and 'c' to 3, and then you assume 'a' and 'b' and apply 'plus', and then you 'exact' c! It's totally clear!"
@@wbfaulk
> Why does the first "assume" grab a whole assertion (P → Q) and not just P?
Because `(P → Q)` is the assumption of `(P → Q) → R`, where R is some arbitrary proposition. You don't know that P is true, you just know that P implies Q.
> Why can't I assume ¬Q?
This is exactly what `assume nq` does in that proof.
> And why do I need those names anyway?
One answer: Because referring to the full propositions by name is more comfortable. Why do you write "By lemma N" in a mathematical proof, and don't write out the full lemma statement every time?
There are other sensible answers to this question, and there are ways to write proofs without assigning a name to every assumption. But this is out of scope for the basic proof shown in the video.
If you'd like a technical answer: Lean implements a form of natural deduction, where each proposition has associated "introduction" rules (rules that let you construct a proposition) and "elimination" rules (rules that let you do something with a proposition). `assume` is just the introduction rule of implication.
> Were the others inexact?
It's called `exact` because you give it *exactly* the assumption that proves the statement. `apply` on the other hand takes an assumption `P → Q` and turns a goal `Q` into a goal `P`.
> I should probably go read a real tutorial
You might benefit from reading an introduction to mathematical proof first. Arguments of the form "Let ε > 0, let x ∈ ℝ, so ...." are very common in mathematics as well, except that here we effectively write `assume` (using "let") without a name. Something of this nature works in Lean as well, by the way - You don't have to give every assumption a name, but you still need to write `assume` ("let") at the start.
> I prefer it when my teachers actually teach things.
I wouldn't speak so harshly about the exposition in the video. Perhaps Altenkirch is just targeting folks with a different mathematical maturity from yours. I agree that this introduction isn't amazing, though.
@@sqrooty Coming from a math and CS background, his introduction was just fine tbh, it's a bit sad that he gets so much hate. Perhaps Computerphile should add a note about the target audience, although I understand this may not be ideal.
Just what I was looking for!
yeah I don't really get what this 'apply' means
Fun fact: Magic: the gathering uses Lean to develop their cards and their online platform (Magic on-line) for rulesets!
Huh, I haven't heard anything about this yet. Do you have an article that explains that they use it?
I would be interested in hearing more about this.
The real cause for citations: you said something cool and I want to read about it too.
If pigs have wings, then a crayfish whistles on a mountain.
that's true
I have a question: the method of proof presented here seems quite counter-intuitive to humans. The "human-style" proof would be to assume (P->Q), assume (not Q) and assume P, then use the modus ponens rule on (P->Q) and P to deduce Q, and then find that both Q and not Q hold, a contradiction. Instead, Prof. Altenkirch presents us with the fact that (not Q) is equivalent to (Q -> False) and uses "apply nq", which in my opinion is very counter-intuitive and not very helpful for understanding. Do you always have to think in this "reverse" way when using Lean?
The issue here is you can't directly assume P, because you're not given P as a hypothesis: you're only given "P -> Q" and "not Q". What you really want to do is say "if I *were* given P as a hypothesis, that would lead to a contradiction"; in other words, you want to prove "P -> False". The way to do this is straightforward: given a proof of P, use the "P -> Q" hypothesis to get a proof of Q, and then the "not Q", or "Q -> False", hypothesis to get a contradiction (i.e. "a proof of False"). But this is exactly what "apply nq" does.
As for reasoning in the reverse, that's generally a feature of the tactic language you find in proof assistants. It allows you to focus on the goal ("not P") and apply various reasoning steps to get the goal closer and closer to a hypothesis until you're done. Lean does allow you to reason "forwards" by directly constructing the proof term you want: in this case it would be "exact nq (pq p)".
@@rhaegav-targaryen Thanks for the detailed answer, I understand it much better now. But still, I would argue that the reasoning presented here in the Lean program is "backwards" and non-intuitive. For example, in terms of the example given at 7:55, the way the Lean program is written translates to:
"I know the rule that whenever the sun shines we go to the zoo" (assume P->Q)
"Assume we don't go to the zoo, I have to prove the sun does not shine" (assume not Q)
"Assume the sun shines, I have to prove a contradiction" (assume P)
"For a contradiction, it would be enough to prove that we go to the zoo" (apply not Q)
"For a contradiction, it would be enough to show that the sun shines" (apply P->Q)
"But we know that the sun shines, so we're done" (apply P)
I would argue that the first three lines are intuitive, but the last three lines are exactly in the wrong order to be easily understood.
@@XBrainstoneX Yup! You're right, the way the Lean program is written is considered "backwards" reasoning. Whether it's more or less intuitive than its forwards counterpart is probably subjective; I think I agree with you that for simple examples like this, forwards reasoning feels more natural. But with more complicated proofs it becomes necessary to use the tactic language rather than directly constructing a proof term (e.g. with "exact"), and the tactic language naturally manipulates the goal (i.e., reasoning backwards, trying to get the goal to match one of your hypotheses) rather than manipulating the hypotheses (reasoning forwards, trying to get a hypothesis that matches the goal). That's probably because while there may be many hypotheses, there's usually only one goal in focus at a time.
It's maybe worth mentioning, though, that Lean _does_ let you directly manipulate the hypotheses as well, with the "at" keyword (for example, saying "rw X at H" where X has type 'a = b', H is a hypothesis containing the term 'a', and 'rw' stands for 'rewrite')
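A minimal sketch of that `at` form in Lean 3 (the names X and H follow the comment; the arithmetic example is made up):

```lean
-- rw X at H rewrites every occurrence of a inside H with b
example (a b : ℕ) (X : a = b) (H : a + 1 = 5) : b + 1 = 5 :=
begin
  rw X at H,  -- H is now: b + 1 = 5
  exact H,
end
```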
Perhaps the way to look at it, is that automated proofs are still in the early stages of development. Like assembly once was the state of the art in computer programming. Simple calculations, obvious to us, took pages and pages of code.
Of course, 'counterintuitive' is not about the detailedness of the process, but just to highlight that this is an early development. Perhaps in the future, automated proofs will be presented differently, and more in line with our intuition.
@@dewaard3301 Eh..... if you want to see "early stages" you should try Automath sometime
I LOVE LEAN‼
This really looks like Coq
12:17: why is there an equals symbol here? What is the difference between "non(P) = (P => False)" and "non(P) => (P => False)"? Maybe by = you mean , no? (What is the difference between and = btw? I would say "=" compares values and compares properties, but I'm not sure.)
Yeah, but that only works for discrete horses; as soon as any one horse loses a limb, the whole team of horses will fall apart (a terrible chore to clean up after).
I want more videos with prof. Altenkirch.
I've always used Coq, but this looks cool too
Whoa, Lean3 :) What do you think of Lean4 ?
So you can't use meaningful variable names in this language? It looks like it's making special interpretations of "n" as a prefix and the fact that "p" and "q" are consecutive.
... have to say at the end that the explanation here was rather poor. I worked on a very simplistic theorem prover (convert axioms to DNF, invert the theorem, apply modus ponens till contradiction or no more unifiable statements) in college, because a textbook discussed them and they looked interesting. Even given that experience (surely more than the expected target audience has), I feel like I have no idea what any of that syntax means or is trying to do, or therefore what the video is trying to get across.
@Anne Baanen But the interpreter apparently magically knew what "np" meant.
@@wbfaulk The interpreter knew that the thing to prove was not q implies not p. When the user does an assume, it assigns the hypothesis, not q, to the variable the user entered, and what remains to be proved is the conclusion, not p. Whatever name the user had entered, it would still have been assigned not q. nq is just a memorable name, but it could be called anything.
Imagine working on a theorem prover, and then immediately thinking one person's variable-naming style is a universal truth.
@@redjr242 He has three "assume" statements, without an explanation of what they are doing. I don't doubt that Lean is doing the right thing. It's just that this video doesn't explain it at all.
I'm glad I'm not relying on Professor Altenkirch (at least as edited here) to teach me anything I need to know.
@@okuno54 Imagine demonstrating a mathematical language and not explaining what any of the operators actually do.
I'm just working through those exercises in lean4. So far, lean is really good.
It would be amazing to have all proven math in Lean and you could just verify it all at the press of a button.
The use of symbols like that reminds me a bit of APL, which is a rather difficult language to learn due to its use of non-intuitive symbols for most of its basic functions.
In general, practically all the notation used in Lean is standard mathematical notation. There's no funky symbols that are specific to Lean like there are in APL, for example.
@@sqrooty I think in the category theory library they use like, a longer arrow symbol in order to give the type of morphisms between two objects, because they can’t use the same Unicode character as they use for the type of functions between two types ?
Though I last looked at that a couple years ago, so I could be wrong, or it could have changed (I know that the category theory library had a substantial change after I started my incomplete effort to use it. In retrospect it may have been a bad choice of a first thing to try to use in Lean )
@@drdca8263 Yeah, the category theory library was certainly one of the harder parts of mathlib to use. And you're right, there's still the occasional funky symbol here and there, but I'd hope that this doesn't happen at the level of this video yet :)
Me: a line is 2 or more points
Him: a tesseract is a just a square
Thorsten is on! Gather 'round!
7:35: why are they not necessary? And if they are, why does the expression ((p => q) => non(q)) => non(p) equal the expression (p => q) => (non(q) => non(p))?
> why the expression ((p => q) => non(q) ) => non(p) equal the expression (p => q) => (non(q) => non(p))
It's not. Implication isn't associative. But the brackets are not necessary, because in Lean, the implication operator `→` is by default what we call "right associative", i.e. `a → b → c` will parse as `a → (b → c)`. This is just for convenience and in line with the notation of functional programming languages (if `→` refers to the function arrow, not implication).
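A quick Lean 3 illustration of that right associativity (the names are arbitrary):

```lean
-- a → b → c parses as a → (b → c): given a proof of a,
-- it returns a function from proofs of b to proofs of c.
example (a b c : Prop) (h : a → b → c) (ha : a) (hb : b) : c :=
h ha hb  -- apply h to ha, then apply the result to hb
```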
@@sqrooty ok, thanks.
It's all fun and games, until some geneticist watching this video creates a pig with wings, just to break your proof. 😁
Without even watching this I can deduce that no one has yet figured out how to make a mathematical proof AI. If and when we do, we will be instantly left in the dust, unable to understand the proofs.
That prof gets more Tolkien and metal as time goes by
He's the archetypal maths professor that non-mathematicians have in their minds!
Imagine: some math professors spend their entire lives trying to prove one theorem... and this guy proves 65,000 theorems with the computer in like 1 year LOL
It would be interesting to understand some use cases of when to use something like Lean and when to use something like Agda - is it that Lean is more proof assistant with some functional programming and Agda is more programming language with some proof assistant or…? Time to do some digging.
Lean 4 is a fully-fledged programming language and self-hosting (i.e. it is implemented in itself). The differences between Lean and Agda are mostly in terms of the type theory they use, what convenience features they provide and the way that each community writes proofs (you'll know what I mean if you look at proof scripts in both).
When I was at Imperial it was still Prolog and then later Micro Prolog.
can u see all the proofs in lean? like in a map or whatnot
I remember filling multiple whiteboards' worth during my uni days many years ago solving these formal proofs. I would have killed to have this IDE plugin!
Is this like coq? What are the differences?
Yes. It makes some choices which are more convenient for formalizing math.
Iirc it has quotient types built in to the type theory, whereas Coq I think needs to use setoids instead?
@@drdca8263 I think the Coq people take offense with this, because you can simulate quotients using another Coq feature :)
In terms of the raw type theory, Coq and Lean are very similar. It's mostly the elaboration layer (i.e. the layer that converts user input into type-checkable terms) where they differ. Unfortunately, I don't know enough Coq to give a fair account.
@@sqrooty Thanks for the correction!
I *really* don't like tactics. Lambdas are way more familiar to me as a functional programmer (and as a mathematician), especially because it's quite hard to follow the types of expressions when using tactics.
here's my solution to the challenge in swift:
typealias Not<A> = (A) -> Never

func challenge<T>(_ fun1: @escaping (T) -> Not<T>,
                  _ fun2: @escaping (@escaping Not<T>) -> T) -> Never {
    let tFromNothing = fun2 { assumeT in
        let notT = fun1(assumeT)
        return notT(assumeT)
    }
    return fun1(tFromNothing)(tFromNothing)
}
I only wish Swift had a type system suitable for more complex proofs, because I much prefer this syntax.
I've never seen Lean before, so my question is: I don't understand the syntax of the proof, what do the expressions pq, nq and p mean? Are they a different notation for P -> Q, !Q and P? (Sorry, I don't know how to type the mathematical logic symbols)
If that is so, then why use two different notations? What happens if you name a variable N, does nq then mean !Q or N -> Q?
Or are the expressions just place-holders and are by some rule related to parts of the example?
I have not used Lean and I don't really know much about logic, but it looks like they are just variable names for the next part of the input. Basically "placeholder" names.
Yes, they are just variable names chosen by the author in such a way that they closely resemble the type. You could call them whatever you like.
This video is really badly explained
Why are the brackets not needed? Wouldn't (P→Q)→¬Q→¬P be the same as ((P→Q)→¬Q)→¬P, which is wrong (for P=true and Q=false)?
Lean probably has right associativity for that operator.
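That's right: in Lean, → associates to the right, so the brackets group the other way. A quick sketch of why the bracketing matters (Lean 4 syntax; a hypothetical snippet I haven't run against a specific version):

```lean
-- → associates to the right, so the two statements below parse identically:
example (P Q : Prop) : ((P → Q) → ¬Q → ¬P) ↔ ((P → Q) → (¬Q → ¬P)) :=
  Iff.rfl
-- whereas ((P → Q) → ¬Q) → ¬P would be a different (and unprovable) claim
```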
I understood exactly nothing.
The inductive proof that all horses are the same color fails in the case of n=2.
I watched that video too, busted 😂
@@JohannaMueller57 You a hater frfr
Imagine using Lean to teach math from kindergarten on.
If the language just lets you assume things, what prevents you from assuming an illogical thing?
(I tried to test this on my own, but apparently you can't just open a file and enter code. You have to first create a directory to contain your file with a utility called "leanproject", but the binary download doesn't have a command named "leanproject", so I then tried to install it via the VSCode extension, which pretty much just downloaded the same thing and still didn't include "leanproject", and after several minutes of trying to figure out where this mystery command came from gave up and deleted everything.)
If you assume something false, you'll be able to eventually "prove" a contradiction
That's the cool thing. You can assume anything you want in the declaration of a theorem! An implication is basically a function. Just as you can define a function to have any input, you can define an implication to have any hypothesis, even false! You won't be able to prove 2+2=5 on its own, but you can prove a theorem that assumes 2+2=5 and concludes something else. In fact, a theorem that assumes false can conclude any proposition. It's called the principle of explosion.
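To make that concrete, here is a sketch in Lean 4 (a hypothetical example, not from the video): a theorem may take a false hypothesis, and from it conclude anything.

```lean
-- The hypothesis h is false (2 + 2 is 4, not 5), so this theorem is
-- vacuously true: from a refutable hypothesis, anything follows.
example (h : 2 + 2 = 5) : 1 = 0 :=
  absurd h (by decide)
```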
The conclusions you come to using an assumption are only valid (or true) if the assumptions are valid (or true). It's really no different than real life: if you assume vaccinations cause Bad Thing™, then we shouldn't be vaccinating... but vaccinations don't cause Bad Thing™, so that conclusion is useless.
Right, and my point is if this is a tool to hold a proof to rigorous standards, in hopes of finding an invalid conclusion hidden somewhere in the middle, what's to prevent that invalid conclusion from being hidden in the things that you've told it to assume?
@@wbfaulk Ah, ok, so it sounds like you're asking how we can be sure we didn't make a faulty assumption in the global scope, i.e. a whole axiom, rather than in a hypothesis of a theorem, whose conclusions would be limited to the scope of the theorem, and whose entire implication remains true even in the global scope.
I've dabbled a bit in Agda, and not at all in Lean, but I assume that Lean probably has a similar mechanism. In Agda, you can postulate axioms, which are theorems that don't require a proof and that the checker assumes to be true. Indeed, if you postulate something like 2+2=5, you will be able to prove a false theorem (in the global scope). The way to avoid this situation is to disallow the use of postulates. I think you can still prove all of constructive mathematics without using any postulates.
Secondly, you can define new data types in Agda and prove theorems about them. For example, you can define the natural numbers, integers, rational numbers, real numbers and beyond using just inductive datatypes and the rules of logical inference. One way this could go wrong is if you define a datatype incorrectly, then proceed to prove theorems you think are about the intended datatype but which are really about the datatype you actually defined. So one place to be careful is making sure the datatypes you define really are what you mean them to be.
If both of these guidelines are followed, I think it's impossible to prove a false theorem, as far as we know. Technically Gödel's incompleteness theorem says we can't know whether math is internally consistent, but it seems to have held up well so far.
Hopefully I understood your question. If so, does this answer make sense?
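For the Lean side of this: the analogue of an Agda postulate is the `axiom` keyword. A sketch (Lean 4 syntax, untested) of how a bogus global axiom poisons everything:

```lean
-- Postulating a falsehood as a global axiom...
axiom bogus : 2 + 2 = 5

-- ...makes False itself provable, and hence everything else
-- (principle of explosion).
example : False := absurd bogus (by decide)
```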
But not-P does not prove not-Q. P → Q does not say that there cannot be another cause for Q.
where can we get lean?
Came after AlphaProof used Lean to get a silver at the IMO.
I've been using Idris, in which the first tautology would be
implicationIsAntiComm : (p -> q) -> Not q -> Not p
-- Not p = p -> Void
implicationIsAntiComm im nq p = nq (im p)
You can write proofs in the same style in Lean (which is also dependently typed) as well, by the way:
`example : (p → q) → ¬q → ¬p := λ h hnq hp => hnq (h hp)` (this is Lean 4, but 3 is similar)
I would have expected them to use Coq rather than Lean
As nice as the P and Q example was, I feel like I could have followed it better if it had stuck with the concrete example of the sun shining and the zoo.
What's the difference between a tautology and an axiom?
An axiom is something that you assume is true without providing any proofs, it is a building block of all further reasoning. A tautology is a statement which is true regardless the input.
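In Lean terms (a hedged sketch, Lean 4 syntax; the name `em` below is just an illustrative label, the real classical axiom ships as `Classical.em`):

```lean
-- A tautology comes with a proof term you must supply...
theorem identity (P : Prop) : P → P := fun p => p

-- ...while an axiom is accepted with no proof at all.
axiom em : ∀ P : Prop, P ∨ ¬P
```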
Congratulations, you proved that contraposition works. And the other one is the law of contradiction.
i have no idea what he is talking about or what he is doing...
english is not my first language but i´m pretty good and can follow all other vids...
It's not just you. He didn't explain *anything* .
If Arnold Schwarzenegger was a mathematician…
Proof by contradiction is always nice but proof by induction is always a leap of faith for me
Inductive proofs are proofs by contradiction in disguise. Say we want to prove statement A. You make an inductive argument where you show that if case _n_ is true, then _n+1_ is true. Now suppose, for the sake of contradiction, that there is a smallest counterexample to statement A. This means that some case _m_ is false. But _m_ being the smallest means _m-1_ is true, and by your inductive argument this means _m_ is true, which is a contradiction.
Hence there is no smallest counterexample, and thus there is *no* counterexample.
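For what it's worth, in Lean the induction principle is just something you invoke, and the checker insists both cases are genuinely covered, which takes some of the leap of faith out of it. A sketch (Lean 4 syntax, untested against a specific version):

```lean
-- Proving 0 + n = n for every natural number n, by induction on n.
example (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl                         -- base case: 0 + 0 = 0
  | succ k ih => simp [Nat.add_succ, ih] -- step: reuse the hypothesis for k
```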
this man sounds uncannily similar to Dr. Strangelove ngl
I find this format impossible to follow along with; lessons shouldn't be improvised like this and then cobbled together. Done properly, this is probably a 30-second gif.
Teach Prolog to your students
I clicked to see the white elf talk
Dutch sasquatch?
I understand proofs how they're traditionally taught in school, but I couldn't follow what this guy was showing. The start seemed straightforward but quickly I realized I had no idea what he was doing. Each sentence made sense but I didn't get how this kind of proofs works.
It is constructive logic using type theory. The very important notion here is that types are propositions and values are proofs. And since this is a higher-order system, a function is itself a value, which means a function of type T is a proof of T. The notation in general is (X : T) to say "X is of type T", or "X is a proof of T", and (A : T) -> B to say that a proof A of type T yields a proof of B.
So a function of type P -> Q is a proof that if P then Q. Not Q is the same as Q -> False, which means "if Q then False", where False is an absurdity (so you refute Q).
So how do you use those functions? You assume the parameters exist: you assume propositions P and Q, then you assume a P -> Q exists, then you assume a Q -> False exists.
And you essentially want to make a function P -> False: if all the assumptions above hold and you have a P, then you can show a contradiction.
To do that, you assume a P exists, and your goal is to produce a False, so you want to show this cannot happen. Because you assumed P, you can call the function P -> Q, then feed the result to the function Q -> False, meaning you end up with a False in hand.
I personally like to look at the proof itself, which would be something like
forall P Q: Prop. (P -> Q) -> (Q -> False) -> P -> False.
Which can be read as: for all propositions P and Q, if a function from P to Q and a function from Q to False exist, then there is a function from P to False.
But a function from X to False can be read as "not X". Translating:
For all propositions P and Q, if a function from P to Q and (not Q) exist, then (not P).
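Spelling that final reading out as an actual term (a sketch in Lean 4 syntax, where ¬X unfolds to X → False by definition; I haven't run this against a specific version):

```lean
-- forall P Q, (P → Q) → (Q → False) → (P → False),
-- i.e. contraposition with ¬ unfolded
example (P Q : Prop) : (P → Q) → (Q → False) → (P → False) :=
  fun pq qFalse p => qFalse (pq p)
```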
I LOVE LEAN
Very bad explanation. It is not clear until the end what he means when he writes "pq".
Because the next line "nq" might mean "Q = False" or "not Q".
So what does "pq" mean? "Define variables P and Q", or "Q = True", or what???
He says "The left-hand side which I call pq" as he writes pq. The left-hand side of the example (P → Q) → (¬P → ¬Q) is the part before the central arrow: (P → Q). Similar to how in 1+2=3 the 1+2 is referred to as the left-hand side.
Hope this helps :)
pq doesn't mean anything. It's just a name he gives to the newly created proposition. And since that proposition is P -> Q, he named it pq. But agreed that it's not super clear.
Yes, he didn't explain that part, which makes it confusing to understand. Actually, what happens has to do with the keyword "assume" in the programming language. Whenever what you have left to prove is of the form A->B, you can write the line "assume x" in your program, where x is any string of letters. After this, the logical formula A will have the name "x". This transforms the statement, and now you have to prove the logical statement B. The reason behind this is that proving the mathematical statement "A->B" is equivalent to showing that whenever A is assumed, B must automatically be true. Hope this helps.
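To make the naming concrete, here is the video's proof with the hypotheses named explicitly (a sketch in Lean 4 syntax, where `intro` plays the role of Lean 3's `assume`; untested against a specific version):

```lean
example (P Q : Prop) : (P → Q) → ¬Q → ¬P := by
  intro pq  -- name the hypothesis P → Q "pq"
  intro nq  -- name the hypothesis ¬Q "nq"
  intro p   -- to prove ¬P, assume P; the goal becomes False
  exact nq (pq p)
```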
Cant wait for when he teaches IFR for us this coming semester
He got the two fingers into the video! 😂
...well... logic... is... going... wrong... with... confidence...
We don't go to the zoo even though the sun is shining... because the car broke down.
So there's no check if you missed a variable...
(But it sounds nice to free up RAM with software, if you write software ;-)
Greetings from Germany
and have a nice day
I wish I could go to zezoo
Would you marry me?
If I marry you, then pigs have wings.
So, no...
Modus ponens?
I don't speak English, but I understand the German well. 😘
@@JohannaMueller57 hdm ?
I also like *Curry's Paradox,* which proves that any and every (grammatically valid) sentence is true.
Always a must watch, really reminds me of the unicorn from the Pixar movie Luck. :)
OMD...APL!
If Rick & Morty is ever envisioned for a live action remake, this guy should be on the top of the casting list for Rick.
lol....give him 20y
I don't get it: if a proof can be wrong, then how is that better than a bug in a program? If it can be wrong, it proves nothing. Like the "proof" that all horses are the same colour: that's not a proof!!? 😥
Could one not prove a program safe with a proof that has a bug? How could we then rely on it?
@@f16madlion Right - there's a technical term for proofs that are not wrong: they are called 'valid proofs'. The horse proof is not valid, there is a mistake (can you find it?). Lean, when working correctly, checks if a given proof is valid. There are (and will be) bugs in Lean or any other proof checker, as there are and will be mistakes in human-made proofs. But the point is that these mistakes can be found and corrected. A proof that many people have checked, or that is verified by extremely well tested and managed code, is very likely to be valid.
What's more, the way to check if a proof is valid is, ultimately, human comprehensible. The rules are simple if you get low level enough. If you ever really care about something, you can check.
A proof can't be wrong but it can be misinterpreted.
He is talking about the importance of making sure we only accept alleged-proofs that actually are proofs, and pointing out the danger of informally stated alleged proofs in natural language.
The danger of, you give me an argument which seems like a proof, but actually it has a mistake that I don’t notice.
When we make our arguments formal to the point that we can have a computer check it, then (assuming the proof checker program on the computer is correct) then we can be sure that the argument actually is a proof.
What a programming language?
Are you asking A) "what is a programming language?" or B) "what is _the_ programming language [which is being used for this program]?"
Sorry, I don't mean to be pedantic, but from your question where there are some missing words, it's impossible for us to be sure what you're asking. I'll leave the final answer to your question to someone who knows what you want after you clarify your question, because I'm more a spoken language linguist! I like learning about maths and computing, but I don't know the answer to B, and my answer to A would undoubtedly be answered better by a programmer!
@@y_fam_goeglyd Yes. I mean which language is used in that program
@@itforall89 lean programming language
so, why exactly are the people you are training not allowed to use the tools you are using? got a bit lost on that point. Shouldn't you train them to use those?
Good video as always. Wondering if Robert Miles will return for the next one
this guy is so abjectly awful at explaining things. it's incredible. i really feel for the UNott students forced to have him. he has absolutely zero capacity to explain new concepts as they're introduced, or empathy for students
I agree. I don't understand most of what this guy says. The videographer can't possibly keep up with the explanations either. All the people saying how great he is seem to already understand the concepts and are just patting themselves on the back. Is this channel for novices or not?
👍
Me chanting Isabelle repeatedly as the video starts
Erlich Bachman is a professor now
I can't say anything positive.
The soonest I've ever been
Sorry, his accent is disturbing...😱
8:13, it's not true: if you don't go to the zoo, it could still be sunny. You could skip the zoo while the sun shines because of an emergency or any other reason.
And that's because you're speaking about a real-life example, not mathematical objects. So it's obviously a tautology in the domain of mathematical objects, but outside of it, it depends on the nature of the propositions.
The statement is that "if it is sunny, then I absolutely will go to the zoo, without exception." This meaning is clear just from "if it is sunny, then I will go to the zoo". The fact that one makes exceptions in real life or lies about their intentions doesn't mean the statement's meaning is unclear.
@@yinweichen I was starting to write another counter-argument, BUT then I had the thought: the statement is about the naive intention, not about what really happens.
Wiiii