Gérard Huet, the guy behind Coq, also wrote a data structure he called The Zipper. Apparently he said at a conference during a demonstration: "Now I'm gonna open the Zipper and show you my Coq".
Source: someone at INRIA who knew Huet told me.
Terrence Zimmermann Now that you can't make up.
Kevin Pacheco I didn't. I assume the guy wasn't trying to troll us when he told the story during a dinner (with mainly scientists and engineers), because well, he wasn't the trolling type. And you can check that Huet actually invented the Zipper (a kind of tree data structure).
Of course, what I said doesn't constitute a proof, but that's all I have.
A Huet Zipper isn't necessarily a tree-like structure, but you can make a Huet Zipper of a tree. A zipper is more or less a structure that gives the notion of a cursor on an underlying structure, such that there are O(1) operations on whatever's underneath the cursor, and it has several operations for moving that cursor around said structure (see the sketch below).
If you want to see some trolling, try out McBride's presentation to the Haskell Implementors Meeting, '09 about the Haskell Preprocessor.
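A minimal sketch of that cursor idea in Haskell (the names are mine, not Huet's; his paper generalises this from lists to trees):

```haskell
-- A list zipper: a cursor into a list, with O(1) access at the focus.
data Zipper a = Zipper [a] a [a]   -- elements to the left (nearest first), the focus, elements to the right

fromList :: [a] -> Maybe (Zipper a)
fromList []       = Nothing
fromList (x : xs) = Just (Zipper [] x xs)

focus :: Zipper a -> a                        -- O(1) read at the cursor
focus (Zipper _ x _) = x

modify :: (a -> a) -> Zipper a -> Zipper a    -- O(1) update at the cursor
modify f (Zipper ls x rs) = Zipper ls (f x) rs

left, right :: Zipper a -> Maybe (Zipper a)   -- move the cursor one step
left  (Zipper (l : ls) x rs) = Just (Zipper ls l (x : rs))
left  _                      = Nothing
right (Zipper ls x (r : rs)) = Just (Zipper (x : ls) r rs)
right _                      = Nothing
```

Both moving the cursor and editing at the focus are constant-time, which is the whole point of the structure.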
+Foagik Thanks a lot for the info, I am undoubtedly uninformed about that kind of structure as I never needed to use one. That seems quite powerful, I've downloaded Huet's paper for further understanding.
Conor McBride, right? I don't seem able to find this presentation, is it on yt?
I've worked with people at the INRIA and that's totally the kind of humour they have! I have no problem believing this story!
I'm starting to realize that just because I come here, it doesn't mean I'm smart.
Ксения Ковалевская lolz...
You're wandering into the trolls' dungeon, you might find yourself getting too much attention from those who are smart.
It's just learning the terminology. You are smart.
Zod 'kneel before Zod.' You were awesome.
***** I wish it applied to me in class. I'm horrible at math.
Drink a shot every time he says "ja".
HGich.T - Tutenchamun, he is clearly a fan ;D
Thank you for the advice I'm still laughing!
ja-ger
I cannot unread your comment....
"Das ich soll!"
Mathematicians should wear robes and pointy hats and be interviewed in front of some copper plated bubbling apparatuses (apparati?). Then replace "theorem" by "curse", objects by "demons", and demonstrations by "invocations". "First evoke a local Riemann-integrable demon. The Fundamental Curse of Analysis allows you to make sure you can restore that demon when you observe its growth." Cédric Villani is the headmaster of the Lyon School of Wizardry and professor of Implicit Demons Taming.
Antonin Caors haha, yeah 😌 Master of Demon Invocations
Amandeep Kumar Come now, there's not even any DnD references yet! 😅
what? they are. wait where do you guys live?
New name for mathematics: demons taming
this sounds just so cool.
Didn't know Thor was into maths
Hehehehe...get it? Because his name is Thorsten
lol red
Thors stone...
math*
it's maths, not math, stop omitting words
As a programmer, this makes so much sense. I was in a weird situation where I learned algebra first, then programming (and algorithms and data structures), and then more maths, and I couldn't help but think about how I could represent the concepts I learned in computer code; but the notation used in maths was set theory, which was slightly confusing at times. Then I noted to a lecturer that a definition he wrote could be expressed as a short recursive function. It turns out he knew (among other things) Haskell, and he agreed that the recursive notation was correct, but pointed out that you can solve some such recursively-written functions using transformations that let you do them as one computation instead of a series of computations (iterative), which can dramatically impact the time they take to run, and he had done work on that: using maths to speed up some heavy logic functions by transforming them.
Anyway, my big take-away is that I'm fairly decent at maths, but I'm a bad calculator, and I strongly prefer computer science notation over set theory notation. If I had been taught maths through programming the concepts, I would have done MUCH better, and it would also have allowed running the functions and (automatically, through code) visualizing the output to gain a better "feel" for it.
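A toy illustration of the kind of transformation described above (my own example, not the lecturer's), in Haskell — the same function written with naive recursion, with a tail-recursive accumulator, and as a single closed-form computation:

```haskell
sumTo :: Integer -> Integer
sumTo 0 = 0
sumTo n = n + sumTo (n - 1)           -- naive recursion: a long chain of deferred additions

sumTo' :: Integer -> Integer
sumTo' = go 0                         -- tail-recursive: effectively a single loop
  where
    go acc 0 = acc
    go acc n = go (acc + n) (n - 1)

sumToClosed :: Integer -> Integer
sumToClosed n = n * (n + 1) `div` 2   -- "one computation", found by doing the maths
```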
As an unsuccessful student of mathematics who went into computer science, I think it's because computer scientists care much more about the style of notation and are better at trying to find the right language to describe their current problems.
Wow, this is soo true.
But you can also tail call optimize at the compiler level.
John McCarthy, the father of LISP, relates how he tried to suggest to mathematicians that functional LISP
Mathematical Poofs! I love this guy
He uses the coq system for his poofs!
Shubham Bhushan is this about Alan Touring :D ?
*Alan Giedrojc*
"...Alan Turing"
@@GilesBathgate bun
War is Peace - Freedom is Slavery - Ignorance is Strength.
"A function which doesn't function shouldn't be called a function"
Real-life James Bond Villain - 2017
false.
Proof theorems ja. samsing wrong wis se pruuf.
Ja?
??
I think this would have been more clear if we had examples of some useful types that propositions and their proofs could take.
I could provide some really simple examples in Idris or Agda, if you're still interested, but it takes a bit of knowledge of Haskell-like syntax.
This was more like a history lesson than an explanatory video
Still this is very interesting stuff
@@Bratjuuc are u still willing to provide examples in agda? I'm currently studying this subject and the lack of examples is killing me
@@ivanpiri8982 Oof, that was pretty long ago. I forgot what I wanted to provide as examples back then.
I'm going to need some time revising, but I'm still willing to provide examples. I would prefer to provide them in Idris purely for syntax reasons.
So far I can illustrate the following:
*) Boolean logic.
*) maybe Existential quantifications
I'll probably update the list in a short amount of time.
"coq could upset english speakers" LOL
and they say Germans don't have a sense of humor
cofi 41 French*
I think he meant the speaker, but isn't he like Dutch or something?
@@steliostoulis1875 I think that was intentional
false.
The gist of the Curry-Howard correspondence is:
In a sufficiently expressive typed programming language (such as Coq, Agda, Idris, etc.)
Types correspond to theorems
Programs correspond to proofs.
You write the type, then you prove it by writing a body for that function.
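A minimal sketch of that reading in plain Haskell (this only covers the propositional fragment; with the dependent types of Coq/Agda/Idris the same idea extends to quantifiers — the names here are my own):

```haskell
-- Propositions as types, proofs as programs.
-- Conjunction is a pair, disjunction is Either, implication is the function arrow.
type And a b = (a, b)
type Or  a b = Either a b

-- Theorem: A and B implies B and A. The program below is its proof.
andComm :: And a b -> And b a
andComm (a, b) = (b, a)

-- Theorem: if A implies C and B implies C, then (A or B) implies C.
orElim :: And (a -> c) (b -> c) -> Or a b -> c
orElim (f, _) (Left a)  = f a
orElim (_, g) (Right b) = g b
```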
??
I recently signed up for CS in school, and math class is doing set notation. A youtube title has never been so relatable.
YouTube is great at suggesting videos. I am starting college and covering set theory, and I realized that there are multiple encodings for natural numbers and that implementation details were being exposed. And then I find this video which mentions the same thing. Amazing.
Underneath Type Theory there is a more foundational mathematical approach called Category Theory, which seems to unify all areas of mathematics. Unsurprisingly, Types form a Category. Just as there is a connection between Homotopy Theory and Type Theory, there is a connection between e.g. Set Theory and Topology via Category Theory, and the theorems of one theory correspond to theorems of the other. I suggest you look into that.
So it seems our galactic president Beeblebrox is into type theory.
BTW, great video! Love this more theoretical approach to Computerphile!
He must have painted his second head Pink, too
Yeah, totally agree! Although I wish he would have written down something
false.
Man, this dude is so intelligent. He just pulls all this theory history out of his head; it's awesome.
Thorsten is a great explainer and talks about some really interesting subjects - liked this video!
You guys should also track down Kevin Buzzard. He's a great speaker, an interesting guy, and he's recently been working on proof formalisation in a language called Lean. He and his students have been writing up a whole bunch of definitions/theorems/proofs, and he is planning to do his whole department's curriculum.
That was really fascinating. I had heard of type theory but didn't know much more than that it resembles types in programming. He contrasts it with set theory, which helps a lot and makes sense. What we seem to have here, as he is saying, is a potential, qualitative (of course partial) transformation of a field, mathematics, because new tools, computers, bring new paradigms. Fascinating, thank you so much. I would be happy to see some more of this, and maybe some examples from this style of math.
More vids on Haskell and functional programming would be great!
false.
6:43 or as Mr. Numberphile would call it: The Parker Function.
Need a proof? Bring on the COQ.
cringe
??
I'm pretty sure the only people who understood what he was saying already understood everything beforehand.
Gordon Chin roughly
I had to rewatch it but I think I got it down. Abstract math isn't exactly my forte though.
I think if you have a first year discrete math class or even maybe a high school computer engineering class under your belt, you should be able to follow along
A first year Discrete Math course is enough
Having a degree in CS should be enough to understand what he's saying
We are happy to welcome you mathematicians into the programming world, but there is one thing you need to learn fast and constantly repeat to yourselves if necessary: Single character identifiers are not acceptable! Use full words and clear and consistent naming conventions. Learn it, live it, love it.
Eyyy funny stuff, I just used Coq a month ago for researching a class project. I was writing a paper on using recurrent neural nets to automatically write proofs in a Coq-like language. I was not successful - turns out it's pretty hard to do, lol
I think of mathematical proofs as a mechanism for problem-solving and gaining understanding while producing useful outcomes. Consider machine learning vs. algorithms. Machine learning can develop a function based on a training set that we can use to predict results from other inputs. That's great, but it doesn't necessarily lead to expanding our understanding of the underlying principles, things that we can use to help solve other more general problems.
Surprising as this may seem, Mathematicians seldom bother with sets in practice. In fact they already use mathematical constructions pretty much exactly like how it is described with types.
For example, take the natural numbers N, defined as a set together with a preferred element 0 and a function s : N -> N that gives the successor (programmers would call it next), and which is "universal" with this property (in the precise technical sense below). That is enough to define N uniquely up to _unique_ isomorphism (= bijection), and that is perfectly good enough to work with: you can ignore any implementation details, and it nicely avoids asking silly questions like whether 2 \in 3, because such facts are not preserved under these isomorphisms.
Here a set of natural numbers (N, 0, s) is _universal_ if it has the following property: for any set X, element x \in X and any function T : X -> X, there is a unique function f : N -> X such that f(0) = x and f(s(n)) = T(f(n)) (from which it easily follows that f(n) = T^n(x)). Of course one now has to prove that such a set exists (surprise surprise, it does).
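Read as a program, that universal property is exactly the fold (iterator) for natural numbers; a rough Haskell sketch of the correspondence (my own names):

```haskell
-- The universal property above, read as a program: given a starting value x
-- and a step function t, there is exactly one structure-respecting map out of N.
data Nat = Zero | Succ Nat

foldNat :: x -> (x -> x) -> Nat -> x
foldNat x _ Zero     = x                    -- f(0)    = x
foldNat x t (Succ n) = t (foldNat x t n)    -- f(s(n)) = T(f(n)), so f(n) = T^n(x)

-- For instance, interpreting a Nat back into an Int:
toInt :: Nat -> Int
toInt = foldNat 0 (+ 1)
```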
This is still an implementation detail. Type theory and category theory abstract away all of this and just have the type N.
What you're describing is a declarative but iterative way of describing a set, which really isn't any better from my perspective than set builder notation, which is also declarative and how the highest abstraction of iteration is expressed in functional languages.
In fact what you're describing is two levels of abstraction lower than set builder notation which is considered implicit iteration while the iterator you're describing here is external iteration
It will be interesting to see if a type theory is developed in which univalence is a theorem rather than an axiom
Check out the cubical system made at Chalmers university.
For those who don't know, "coq" means "rooster" in french.
Dr Thorsten Altenkirch
At 13:50, you talk about the Univalence Principle, where it is stated that if two things are equivalent, then they are equal.
I have interpreted this as "If two things are logically equivalent, then they have the same properties."
I shall show by example that this is not always the case:
From the premise "forall x (Sx implies Px)", it can be proved that "Exists x (Sx & Px) iff Exists x (Sx)".
Using this example, I shall prove that if two logical expressions are equivalent, they don't necessarily have the same properties. In this example the property they don't have in common will be truth value.
Take both Sx and Px to be "x=x". Substitution gives (1):
"forall x (x=x implies x=x)"
can prove
"Exists x (x=x and x=x) iff Exists x (x=x)" (1)
However, by the principle of transposition, it will also be true that (2):
"forall x (not (x=x) implies (not x=x))"
can prove
"Exists x (not x=x) and (not x=x)) iff Exists x (not x=x)" (2)
Example (1) has a truth value of true, however that of (2) is false. (x=x is always true, likewise (not x=x) is always false.)
Thus if two statements are logically equivalent, they do not always have the same properties. Using transposition again, this means that if two statements have the same properties, they are not always the same statement.
I didn't mean "logical equivalence" but logical equivalence is a special case. Indeed all operations on propositions in type theory are "truth functional". I don't understand your counterexample in particular I don't know what the principle of transposition is. I have no idea how you justify the step form (1) to (2).
It turns out I made a mistake in the proof. I found out that I had used a universal specification rule on a negated universal statement. Because it was negated it was existentially quantified and the rule couldn't be used. Sorry about that. (Thanks for the reply though)
Alex Armstrong no problem. And cheers to you for experimenting and discovering your own flaw.
I think the guy behind this saw us complain about the 404 video and is now giving us some meatier content. I'm happy. :)
Ja
ja
ok?
Please do video on backpropagation algorithm in neural networks
or linzer computerphile, don't be afraid to get a bit technical!
or linzer 3blue1brown did that video recently
??
I was surprised that he managed to talk about set, type and homotopy theory, abstract mathematics and new foundations of mathematics without mentioning category theory.
Calle Silver-Granhall I think because category theory is more difficult to explain to a general audience than the other three. Also, previous videos have alluded to type theory and set theory.
There is some on the whiteboard in the back...
I doubt it.
Category Theory isn't hard.
Literally everything he mentioned would have been way easier if he did a quick recap of category theory and what it is. I mean, basically everything he talked about falls under category theory.
ok?
can someone explain what this math on the whiteboard behind him is about?
It's about his research for formalizing type theory inside type theory. Basically, if lots of math can be expressed in TT, why not try TT itself as well? Since proofs in TT are executable programs, modeling TT inside TT is also a way of doing metaprogramming and proving properties of metaprograms in the same system.
Until a latter day Gödel arrives. (this is a reply to András Kovács's comment below).
Could you talk about the Haskell type system?
Not as rich as the Coq, Agda or Idris ones
??
11:32
"It's true that 2 is an element of 3, doesn't really make any sense, right?"
I mean, isn't 3 basically == | | |
And 2 == | |
so there indeed is a *| |* inside 3.
So 2 is indeed an element of the set called 3
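That tallying intuition is close to the usual von Neumann encoding, where each natural number is the set of all smaller ones (n = {0, 1, ..., n-1}); under that particular encoding 2 ∈ 3 really does hold, which is exactly the sort of implementation detail the video is objecting to. A small Haskell sketch of the encoding (my own names):

```haskell
import           Data.Set (Set)
import qualified Data.Set as Set

-- von Neumann encoding: each natural number is the set of all smaller ones.
newtype V = V (Set V) deriving (Eq, Ord, Show)

vonNeumann :: Int -> V
vonNeumann 0 = V Set.empty
vonNeumann n = let prev@(V s) = vonNeumann (n - 1)
               in  V (Set.insert prev s)

isElementOf :: V -> V -> Bool
isElementOf x (V s) = Set.member x s

-- isElementOf (vonNeumann 2) (vonNeumann 3) evaluates to True --
-- a fact about this particular encoding, not about the numbers 2 and 3.
```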
So would Type Theory make it possible to compile abstract math into executable code? I ask because that is basically what I dream about.
Yes.
Cool!
Kind of... although in general when you compile type theory it's more along the lines of interpreting it. For example, for a Coq proof to be "proven," all you have to do is compile it and there's nothing left to execute. So in a sense, executing the abstract math happens as your proof gets verified. As a result, it doesn't feature many tools for writing executable code (although it does allow you to "extract" the code into another language that can provide the tools for writing useful programs). You can also use some other languages to compile type theory programs into executable programs (Idris and ATS, for example).
no. you would have to create a proof to have a program (proofs are programs). and proving in Coq is much harder than conventional programming
I know I'm late but props to the one(s) who made the subtitles. Very creative with using the relevant symbols.
Ok, the comments have spoken: "Dr. Thor" it is
Edit: very interesting! Thanks for doing these!
I've always struggled with gaining intuition in abstract mathematical theories because of the set theoretic foundations, (set theoretic definition of topology, for instance)
Wow. Brady's really bringing in the big bucks. He brought in the scientist from Independence Day.
Is Computer Science considered a subfield of Mathematics? It has its origins rooted in Set Theory, Graph Theory, and other mathematical fields, so wouldn't it be redundant to say "Computer Science union Math" when describing Type Theory?
My bad, "Computer Science intersection Math"
computer science is applied maths
There are subjects in Computer Science unrelated to Mathematics. There are subjects in Mathematics unrelated to Computer Science. The symmetric difference is going to be nonempty.
+Vulcapyro like ?
+htmlguy88 Computer science includes a lot of subjects related to the physical and practical aspects of computing, such as real world efficiency, caching, representation of human language text, issues with human-machine interaction, making the work less error prone etc. Mathematics includes subjects that cannot be accurately represented in computers, such as infinitely precise numbers, infinitely large sets, the entire field of mathematical analysis (differential and integral calculus) etc.
gotta love the category theory diagrams on the back
This guy looks like the scientist from Independence Day!!!
he's my lecturer! top top guy
??
can you make videos explaining type theory and how math ideas can be proven with it?
Formal methods are getting more and more important as systems grow. For things like blockchain processing one approach to proving correctness is Hoare's Communicating Sequential Processes. Then there's Predicate Transformer Semantics etc.. It all gets very complicated.
10:49 he is onto something imo
Aesthetics and genres and psychology and visual mediums paired with endless user data
This guy takes the no. 3 spot after the Klein bottle guy and the Japanese guy.
No way. Matt and James come before him by a long shot.
Patrick Wienhöft but they are over on numberphile ;)
Patrick Wienhöft
I'm talking on the "looks like science/math" scale. Although I think James could compete with this guy. Oh I forgot the Chemistry guy too!
I was going to say, no one looks more like science than Prof. Sir Martyn Poliakoff.
Well, then Tadashi shouldn't be there. He just looks like an ordinary Japanese guy.
The chemistry guy obviously belongs in there then, yes ^^
Computers along with their parts, be it monitors, screens, keyboards, mice, remotes, modems, scanners, printers, floppy discs, SD cards, CPUs, and other devices, can go hand in hand whether they are desktops, laptops, iPads, Kindles, smart phones, cell phones, palm pilots, or other electronic devices, and can serve well if they do not overheat or run out of storage space.
Is there a technical limitation for the 50fps?
Brix Zigelstein It's because they're Britons and they grew up with PAL.
Brix Zigelstein Yes, it's PAL equipment, and also reduces the risk of light flickering effects when filming in areas with 50 hz mains power.
Top 10 programming languages to learn in 2020: 1. Type theory 2. Type theory 3. (wait for it) Type theory ...
Could we have a second video, perhaps going into more detail on type theory, maybe with some examples of its uses? pls
What are the axioms of mathematics which can be used to derive all of mathematics?
There isn't just one set of axioms you can use, and how you choose changes what you can show.
Kram1032 the most popular, I would say, are the Zermelo-Fraenkel axioms, which are essentially rules you can follow to build sets
Kram1032 it's weird that I was playing with Lean yesterday.
While there are several sets of axioms which you can use as a base for most of your maths, there is actually no (effectively axiomatizable) set of axioms which
- you can derive all maths from,
- is complete (i.e. every true statement can be proven to be true), and
- is free of contradictions (i.e. no statement can be proven both true and false).
This is known as Gödel's incompleteness theorem.
The Peano axioms mentioned in the video are probably the easiest to get started with.
However, mathematics wouldn't be what it is if you could not devise your own systems. Bill Gosper famously delivered a proof about polynomial division just by optimising working code.
Besides, mathematics is proven to never be complete. Deriving its entirety is an absurd idea. Some things are simply true, even though they cannot be constructed. De Morgan's rules, for example, are proven true by exhaustion, but are not deduced.
And some things are neither true nor false. And there is proven to be no end to such things, no matter how many you define axiomatically.
Are there any programming languages that deal heavily with type theory?
If I understand the gist of type theory correctly, C# seems to fit the bill. Everything in C# has a type and types have well defined interactions. Even type casts are validated against the type hierarchies. As much as I love the language for its strict adherence to types, I also hate it for that reason. It takes a lot of weird but commonly accepted workarounds to make a functioning program in such a strongly typed language.
C# deals with types, but it does not directly deal with type theory. C# is weakly typed because it is possible to get type errors at runtime. Languages like Haskell are strongly typed, but they don't deal completely with type theory. Languages like Coq and Agda deal directly with type theory.
Nathan Dehnel Haskell or Agda are the language you are looking for.
I second Haskell, check out Lambda calculus before you jump in tho
@FichDich InDemArsch InvalidCastException comes to mind (you can write code that casts an object to anything). NullReferenceException also comes to mind, although that is more of a design flaw where strings and objects are always nullable, which I last heard they would be fixing. It's not as bad as dynamic languages like JavaScript though. I use .NET languages like C# regularly.
This is exciting.
Just finished Group Theory and this seems like something I can learn from a text book now.
Set theory has the concept of universal set on which a choice function to extract a particular set can be applied....
Like moving from everything towards being specific.
Does universal set include everything that might be constructed?
I just watched this video and to my understanding using HOTT you would be able to show that in fact two different proofs have the same outcome and thus are the same.
So you would be able to first create a proof and then verify that in fact it is an improvement (more elegant, easier, whatever) to an already existing older proof... ?
Or, the other way around, you would be able to show that a proof is really different from another proof...?
We are talking about a type of logic that has to be necessarily less expressive than First Order Logic (that is used for mathematical proofs) otherwise computers cannot help that much.
insidioso The trick is that constructive mathematics works well with the termination requirements for programs and physical computation. To derive equivalence, you simply have to inhabit the type representing the equivalence. The first order logic is derivable from the theory, which is something set theory cannot do.
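A rough Haskell sketch of "inhabiting the type representing the equivalence" (this mirrors Data.Type.Equality; with full dependent types the same idea applies to equality of arbitrary values, not just types):

```haskell
{-# LANGUAGE GADTs, TypeOperators #-}

-- The type a :~: b is the proposition "a equals b"; Refl is its only proof.
data a :~: b where
  Refl :: a :~: a

-- Symmetry and transitivity of equality, established by writing programs
-- that inhabit the corresponding types.
sym :: a :~: b -> b :~: a
sym Refl = Refl

trans :: a :~: b -> b :~: c -> a :~: c
trans Refl Refl = Refl
```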
No, ZFC itself can be modeled inside Homotopy TT, for example.
Isn't formal programming a way of proving mathematical theorems?
This is awesome, my mind is blown and I feel much better
I have been studying formal mathematics for my Computer Science thesis for a little while now. And looking back on it, I think Set Theory has its priorities backwards. Set Theory takes two sets of elements and tries to evaluate whether a function definition exists as a pathway between these two sets. Type Theory takes a set and a function definition, and uses it to generate elements of another set, which can be used as proofs for the other set.
How I understand it is that Set Theory is like a funky subset of Type Theory, because Set Theory deals with concrete definitions generated from abstract ideas, whereas Type Theory works the other way around. In a way, this is more useful and powerful than Set Theory, because you can easily create various definitions of a set of elements that would otherwise be hard to do using Set Theory. It's not impossible. It's just hard...
I actually once used 2 (element) 3 in a Discrete Maths homework where we couldn't use
2 as an element of 3 doesn't seem silly to me.
I like listening to Dr Thor here
But I'm gonna be honest, I got lost about 5 minutes in :(
DeoMachina I was lost in his "ja"s.
His name is Dr. Altenkirch. He's not nordic either.
Mark, that's because the implementation and representation don't matter. As Dr Thor said, you have 1+1+... or binary or decimal... It also doesn't matter what shape the symbols are.
I was surprised to find out that the Predators use decimal system the same way as we do, just that symbols for 0, 1, ..., 9 are those funny patterns.
haha "Dr Thor", really like it, esecially when i read its an abbeviation of his real name..
ok?
I like to think that I know quite a lot about both mathematics and computer science, but this goes way over my head
I will die a happy man when the producers of James Bond finally hire this man to be their evil genius.
Spelling out a rational plan that works in the end :)
11:35 I'd say that DOES make sense. In order for a set of 3 things to exist, a set of two has to exist in it, and so on. I think pure abstraction in mathematics is a covert form of idealism. So the definition that you said makes no sense seems to me to make sense if you have a materialist view of the world, by which mathematics is done ON the world and not outside of it. This definition in set theory makes sense because numbers are based on perception, by which we can assimilate two distinct objects as being the same thing (equality) and group them together, which is counting them.
The biggest problem will be when dealing with infinities, other than memory constraints some of the concepts become quite abstract.
It's not actually that much of a problem since we can use codata. Consider, for example, if we wanted to represent an infinite sequence of binary digits. We can do this by defining a function from the natural numbers to the booleans, so that if we want the nth digit, we just feed n into the function defining the sequence. Since the function's algorithm is finite, we can represent the full infinite sequence using finite data.
Generally speaking, so long as we can define something in finite terms (which is everything we can define in practice, including the concept of infinity itself), we can represent it on a computer using those exact terms.
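A small Haskell sketch of both views described above (the names are mine): the infinite sequence as a digit-lookup function, and the same thing as a lazily unfolded stream.

```haskell
-- View 1: the infinite sequence as a lookup function -- finite code, infinite object.
type BitStream = Integer -> Bool

alternatingBits :: BitStream
alternatingBits n = even n            -- digit n is True exactly when n is even

-- View 2: the classic codata presentation, a lazily unfolded stream.
data Stream a = Cons a (Stream a)

nth :: Integer -> Stream a -> a
nth 0 (Cons x _)  = x
nth n (Cons _ xs) = nth (n - 1) xs

alternatingBits' :: Stream Bool
alternatingBits' = go True
  where go b = Cons b (go (not b))
```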
I did not find the other part of the video on Numberphile; maybe someone can pass me the link.
For me, it's always been that the sign ∩ should be turned 90 degrees anticlockwise in this statement. :)
Excellent video on a relatively new field of study with a real future!
If it is unable to distinguish between different representations of numbers, is it impossible to do boolean algebra? Is it possible to calculate the result of 42 AND 69 for instance without representing the numbers in binary?
A function that does not function should not be called a "function". Makes sense on so many levels!
What about Whitehead and Russell? I guess they came up with such an idea way earlier...
And the problem with constructibility is that very important theorems couldn't be proven, e.g. that every vector space has a basis; and if you like non-standard analysis, or even normal analysis, you would lose many theorems that are quite crucial.
TIL a function that does not function is not a function.
Did I get this correctly?
- Set theory is like a declarative programming language and a set may not be enumerable or infinite (termination problem);
- Type theory is more constructive much like an imperative programming language.
No, if anything, it's the opposite, as type theories tend to be functional languages.
Set theory isn't like programming at all. It's a completely static logical system which, by default, has no computational interpretation.
Type theory exists so that a computer can internalize the semantics of what a program is meant to do. If you want a program to sort a list, you can use type theory to define what it means to sort a list, then a type checker verifies that the program meets that specification within that type. What's significant is that, at the extreme ends of expressiveness, this is sufficient to act as a foundation for mathematics.
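A rough sketch of the "type as specification" idea using GHC's type-level features (all names are mine; in Coq/Agda/Idris this is written much more directly with dependent types): a value of type Sorted xs is machine-checked evidence that xs is in non-decreasing order, so a sorting function can be required to return such evidence alongside its output.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeOperators #-}

import Data.Kind (Type)

data Nat = Z | S Nat

-- Evidence that one type-level natural is less than or equal to another.
data LTE :: Nat -> Nat -> Type where
  LTEZero :: LTE 'Z n
  LTESucc :: LTE m n -> LTE ('S m) ('S n)

-- Evidence that a type-level list of naturals is in non-decreasing order.
data Sorted :: [Nat] -> Type where
  SortedNil  :: Sorted '[]
  SortedOne  :: Sorted '[x]
  SortedCons :: LTE x y -> Sorted (y ': ys) -> Sorted (x ': y ': ys)

-- A hand-written certificate that [0, 1, 2] is sorted; an ill-formed one
-- would simply fail to type-check.
example :: Sorted '[ 'Z, 'S 'Z, 'S ('S 'Z) ]
example = SortedCons LTEZero (SortedCons (LTESucc LTEZero) SortedOne)
```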
So this may be way off, but is SFINAE (in C++ templates) for type deduction basically using the same relationship to encode logical statements?
Could this be used to assist with the verification of the proof to the ABC conjecture?
I was taught that it is never embarrassing if you prove something and it gets established and afterwards someone else proves it to be wrong, because this is how the whole research process works. You prove with what knowledge is available to you; if it gets established, that means the smart brains of your time can't prove it wrong, they agree with it. And that is alright. 00:30
It is as embarrassing as designing a phone which goes up in flames.
Where does category theory sit here in relation to type theory and rest?
Another freshman university student who thinks he's a genius because he knows CT. Lol.
This makes me feel guilty about not having read TAPL for two weeks...
I'm a Swedish mathematician as well.
Peano arithmetic is a way of defining natural numbers via induction; I don't understand how it can be used to encode the set of naturals.
Daniele Scotece Pretty sure what he means is that you represent the natural numbers as the set of things that can be produced from peano's successor function and 0. I.e. {0, 0+1, 0+1+1...}.
So you define the set by induction. Not sure how to use that efficiently but there's probably some very smart maths for doing it.
Oh I see that now, thanks sir, always a pleasure to see thoughtful comments on yt!
Have a nice day
Computer Science ∩ Mathematics = Computer Science
How would you check the infinitude of the primes via computing??
such a great thumbnail
Great video, hope to see more abstract discussions like this in more Computerphile videos! Or at least a Numberphile video on this, but more from the other guys' perspective.
Can anybody relate this to TypeScript, Babel, and how types (shapes) are identified and considered to be interpreted into other types? Like replacing a type instance with another equivalent instance which does the work.
Where are the helpful pictures :(
6:43 Parker Function
Is it wrong that I love the sound of this man's voice more than what he's talking about?
I super enjoyed this video
Nice explanations, but I doubt many will understand. I learned Measure Theory and thus can understand the general outline, but those unfamiliar will be lost.
Type vs set transcends "has a" and "is a" (in no particular order but maybe I'm off target..?)
Seems weird not to mention that HoTT is not yet a fully developed field and the idea of it functioning as a new foundation for mathematics is only a hypothesis; there are many theorems left to prove before one can conclude that the theory is powerful enough to build all of mathematics from it, like one can do with set theory. Still a really fun prospect for research though.
This video is really abstract to me compared to the others. In some videos the most basic concepts (to me) are explained, but this one is really hard to follow; so much stuff is assumed to be known and there doesn't seem to be much concrete stuff to relate to. It would have helped if it had been a bit more scripted, maybe with examples or something. It's quite hard to get into Dr Thorsten Altenkirch's head on this topic!
The entire topic is abstract and you'd basically be learning math from scratch to understand it.
ok?
"That is getting abstract."
lol ikr how you gonna say this when already talking about type theory xD
COQ magic :D
Can't be misinterpreted at all, eh?
The Gathering?
??
12:23 abstract secrets from which originate powers of the S U C C
Running out of witty letters to identify our mathematical inventions. Honestly, "Coq"? Was proofComplete taken?