This is one of the best talks I've ever watched. If I could just encourage relatively young but hungry programmers to watch one talk, it might be this one. It led to me firing off about 4 different emails just while watching it, all of which I hope have some small impact on people.
Can confirm. I'm at about 13:17, and the peripheral profundity I've sort of wandered into surrounding the concepts he's explaining is alone of immense value. For some reason his delivery is particularly effective for me, though I must admit my mind was somewhat primed for this sort of thing. All that said, I hope your experience is of comparable depth and relevance. Also, yes, hello, I am on the internet pushing buttons on the computer rapidly to do things.
Also, I genuinely look forward to the ~30 minutes left of the video. (Not un)fortunately, YouTube mobile halts the video while posting comments. Must be that halting problem I keep hearing about.
Propositions as types, proofs as programs, simplification of proofs as evaluation of programs. Very cool stuff.
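To make that third correspondence concrete, here's a minimal Haskell sketch (Haskell seems apt for a Wadler talk; the function name is my own invention). A proof that introduces a conjunction and then immediately eliminates it contains a detour, and simplifying that detour is exactly evaluating the program:

```haskell
-- A proof of A from proofs of A and B, built with an unnecessary detour:
-- introduce the conjunction (x, y), then immediately eliminate it with fst.
detour :: a -> b -> a
detour x y = fst (x, y)

-- Proof simplification removes the intro/elim pair; evaluation does the
-- same thing: fst (x, y) reduces to x, leaving the direct proof \x y -> x.
```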
Mind expanding, plus he has a cape too.
Wadler, Philip - "you don't put science in your name if you're real science"
What a great conference. It's nice to walk that anthropological path of the people and the context in which things were discovered. It gives better insight into how the problems evolved and the abstractions they needed. Thanks for this!!
This is perhaps my favorite talk about the Curry-Howard part of the Curry-Howard-Lambek correspondence. Phil also has talks about the other part (category theory), but there's so much to dive into here - Kleisli and Eilenberg-Moore categories, Lawvere theories etc., but most of all the general idea that things can be fully described by the relational structure they exhibit to the rest of the "universe".
Fortunately, category theory has by now fully arrived in the FP world (~25 years from the discovery of Monads in FP?) - and there are many great talks on the subject.
Dependent Types are still a fringe phenomenon in actual programming (path-dependent types in Scala and TypeScript are probably the closest we've gotten in languages that are at least somewhat widely used in the industry). Linear Logic and linear types have gotten some traction in certain segments of the programming world, but to my knowledge, nothing concrete has manifested in production programming languages. (Though please feel free to correct me.)
Not linear types, but affine types have found use in Rust
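For what linear use looks like in practice, here's a minimal sketch assuming GHC's LinearTypes extension (available since GHC 9.0); the function name is mine:

```haskell
{-# LANGUAGE LinearTypes #-}

-- The multiplicity arrow %1 -> promises the argument is consumed
-- exactly once. Swapping a pair uses each component exactly once,
-- so it typechecks at a linear arrow:
swapL :: (a, b) %1 -> (b, a)
swapL (x, y) = (y, x)

-- A duplicating function such as \x -> (x, x) would be rejected at
-- %1 ->. Rust's affine types are similar, except a value may also be
-- dropped without ever being used.
```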
This is a great talk. And the best part of it is the Q&A at the end. Very informative presentation and even more thoughtful and great questions. I'll certainly come back and watch it again. Thanks Dr. Wadler!
Last question is the best. Definitely my new favorite talk.
The best explanation of proofs, types, and the history of algorithms! Seems like a category? Yes, I guess. Thanks a million, man.
Worth the time, definitely.
33:55 For a moment I thought he would say "Thank you very much, and I will now take off" :)
Amazing lecture! So many important ideas and discoveries shared in less than an hour! Thank you Philip!
"[The core of Functional Languages] is not arbitrary. Their core is something that was written down once by a logician, and once by a computer scientist. That is, it was not invented, but discovered. Most of you use programming languages that are invented, and you can tell, can't you. So this is my invitation to you to use programming languages that are discovered."
Like which languages?
Oh... got it after watching the video.
My interpretation is that he adopts the word "invented" for things that have an ad hoc nature, rather than something that later proves to be fundamental.
One of the best talks on anything.
Amazing talk. Very informative. Thanks to everyone who discovered all these beautiful things in CS.
That was an amazing lecture. Very insightful!
Wadler leaves out one critical consideration with his superficial criticism of the computer industry for not programming in languages built from lambda calculus. Cost, both in terms of computational complexity and in programmer's time.
He brings up the example of garbage collection to prove his point that industry is too slow (stupid?) at adopting good ideas. The reason it wasn't adopted earlier is that it comes with a rather high cost in computing time and space that older machines couldn't handle. The reason it's adopted now is that machines are faster and so capable of performing garbage collection in a reasonable amount of time, and since it saves programmers from the work of coding memory release by hand, it's now very cost-effective.
It's more nuanced than that. Lisp has had garbage collection and high performance for at least 40 years. Gabriel's "Worse is Better" is one explanation of why things are the way they are.
Garbage collection is simply a technique to get around lazy programmers by wasting endless machine cycles. I have never needed a GC in my entire life. My local variables are local, my global ones are global, and if I have a few global ones too many, then that's simply because it doesn't matter.
@@davidnmfarrell Lisp had garbage collection because it needed it. Rational languages like C don't have it because they don't need it.
@@lepidoptera9337 C does not "need" it - in fact, it would harm it - because one of its most important use cases is the fine control of memory by the programmer, which is very important for low-level systems programming. For instance, the "pointer arithmetic" feature only makes sense in a language without automatic garbage collection, where you can arbitrarily access any memory address you want.
This huge power comes at the price of dangling pointers, buffer overflows and specific security issues. Yes, it is tremendously powerful, even to accidentally shoot yourself in the foot. But it is a power that is a basic requirement for low-level systems programming.
For most scenarios of application programming, a "managed memory" language, with managed references rather than pointers, is more programmer-friendly, as it frees the programmer from having to think about low-level memory issues and eliminates an important class of defects/"bugs" that could arise by subtle programming errors.
A possible solution to the "Independence Day" computer-virus plot device: the scientists at Area 51 had 50 years to learn and play with the computer of the crashed alien spaceship, so they developed a framework to interact with it. So, the code that we see in the movie is a DSL executed against that framework, which in turn translates the intended semantics to the alien computing system.
Regarding the name "Computer Science," Peter Naur (The 'N' in BNF syntax specification) called it "Datalogy" - the study of data.
Brilliant!
What a great talk!
epic !!!
I am looking forward to understanding verbal and real propositions in logic.
Prepare to think; it's not "dumbed" down and the "motivation" is only done in passing.
"3 things were proven at the same time, that's powerful evidence that mathematics is discovered, not invented" -- We have separate simultaneous/concurrent inventions all the time. It is very common that e.g. two or multiple instrument manufacturers arrive at similar designs or solutions to technical problems. It is no wonder that this occurs. The level of development depends at least in large part on the development of society more broadly. It is the same in mathematics -- it is always linked to technical, social, economic, etc. circumstances.
33:30 moment of the talk
WOW
"Informatics" is a way cooler sounding name than "Computer Science", I've also heard it called "Experimental Epistemology" lol
In French, "computer science" is translated to "informatique" (informatics)
Is it basically: the proposition "A is true" is equivalent to a world of type A being real or something? And "x is of type A" is equivalent to the proposition "A is true for some x"?
It's more like `x: A` is a proof of A, i.e., a witness of the truth of `A`. So, for instance, with such a witness `x` for `A` and `y` for `B`, you can prove `A AND B` by putting the two witnesses together: `(x, y)`.
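As a minimal illustration (in Haskell, with invented names): conjunction becomes the pair type, disjunction becomes a tagged union, and implication becomes the function type, so a proof is literally a value you can construct:

```haskell
-- AND-introduction: a proof of A AND B is a pair of witnesses.
andIntro :: a -> b -> (a, b)
andIntro x y = (x, y)

-- OR-introduction: a proof of A OR B is a witness of one side,
-- tagged so we know which (Either in Haskell).
orIntroLeft :: a -> Either a b
orIntroLeft = Left

-- Implication: a proof of A IMPLIES B is a function turning any
-- witness of A into a witness of B; modus ponens is application.
modusPonens :: (a -> b) -> a -> b
modusPonens f x = f x
```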
Proposition A is true either because it is an axiom (a choice we make) or because it derives from the axioms through a finite number of logical operations. And in general, some propositions can be proved to be neither provable nor disprovable from the axioms. That's Gödel in a single sentence (or two).
You must assume A and B are true (= 1), and those assumptions are never discharged, so you have proved nothing if A and/or B = 0.
I'm not sure how the lambda calculus would model some features of reality like the fact multiverses are causally disconnected from one another, or the fundamental limits to reductionist models implied by chaos theory. I'd love to see anything addressing those topics, though! Also, does anyone know if the 'linear logic' he mentions is related to the Pi calculus?
I've gone back and forth in my mind about whether pi-calculus and linear types are related. I think there is a relationship, but not an exact equivalence. Pi-calculus says that you send your resources away and can't use them again until you get them back, but linear types can also encode that you *have* to have used up a resource at some point in the program. As far as I can tell, that has no analogue in pi-calculus.
We can simulate chaotic phenomena using computers (if inaccurately) so we can do the same thing with lambda calculus. Causal separation is also easy to model: simply construct a list or set of universes and map pure functions over them, to describe applying physics without side effects in other universes.
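A toy sketch of that last idea, assuming Haskell and an invented Universe type: mapping a pure function over a collection of universes evolves each one with no possibility of cross-universe effects.

```haskell
-- An invented stand-in for a universe's state, for illustration only.
type Universe = [Double]

-- A stand-in for one tick of physics inside a single universe.
stepPhysics :: Universe -> Universe
stepPhysics = map (* 1.01)

-- Purity guarantees causal separation: evolving the multiverse is just
-- mapping over it, and no universe can observe or affect another.
stepMultiverse :: [Universe] -> [Universe]
stepMultiverse = map stepPhysics
```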