Kevlin Henney always has some of the most interesting talks about nothing.
the Seinfeld of tech talks
@Khalil Aydin no you didn’t. Shut up
@@DeepDuh lol
He talks in an interesting way but doesn't really say anything. What does it mean that when you turn off your computer your program is still running, we can all go home? :)
Exactly! Blah blah blah, get the hell on with it and please stop with your corny programming dad jokes
In a way, Kevlin crafts the introduction, the motivating example, the background, and the related work for most PL research papers.
Nice explanation of Church Numerals. I remember a schoolteacher asking us ‘what is 2? Can you explain it?’ Or words to that effect. At the age of 11 I was stumped. It would be some years before I entered Alonzo’s Church.
I loved the quote: "Physicists don't use namespaces", epic quote.
"Silence of the Lambdas", good grief that cracked me up.
I'm not a fan of this talk, but working through the Structure and Interpretation of Computer Programs gets a big thumbs up! That book and course opened my mind in a way nothing else had.
Everybody should read at least a part of "Structure and Interpretation of Computer Programs". It is enlightening. Or try learning OCaml, or, if you are a bit braver, Haskell (I don't like the syntax too much). It is eye-opening the first time you do it.
Or category theory! (that's the one I'm hooked on now) Anyhow the Curry-Howard-Lambek Isomorphism shows that all these things are the same!!
Haskell syntax is kind of nice.
Saves you some parentheses (looking at you, Lisp)
But "Functor", "Monad", "fmap"?
Just name them "Mappable", "Joinable", "map" (make this List-only function generic, as "fmap" is).
I watch this 47 times a day
A lecture on arbitrary semantics and controversial distinctions. Entertaining none the less.
This talk makes me appreciate JavaScript (in the form of TypeScript of course, as a sane developer) even more. So many of these things are just built into the language: the talk brings up concepts that sound big and complicated, and then I go "ohhhh, it's just an arrow function" or "ohhhh, it's just a JS object", and it all gets much simpler. Even typing lambdas is easy, because the type of an arrow function in TS is (type_of_first_param, type_of_second_param) => return_type. TS is really underappreciated as a general programming language away from the web frontend!
The 1932 foundational article is accessible here www.jstor.org/stable/1968337
Technically, in `square(x): x * x` all of the open bindings (free/unbound variables) are closed over by the lexical environment, vacuously, since there are none. It might make sense to distinguish a wholly self-contained closed expression from one that incorporates various enclosing levels, but strictly speaking, I don’t see the value in denying “closure” to cases where the enclosed lexical environment is empty.
My man out here barefacedly using Comic Sans and Papyrus on his slides like they're not crimes against nature.
Comic Sans is the font that Simon Peyton Jones uses in all his talks
Comic Sans is recognized as an accessible font for visually impaired persons. We should stop demonizing it.
@@silvianaamethyst Nah, we should put them out of their misery. Imagine not only being disabled, but having to look at comic sans to read anything. I'd rather be dead.
I think sometimes maybe nobody actually knows what closures or lambdas are, like Taoism. If you actually knew what they were, you could describe them concretely and easily rather than giving definitions which mean nothing. People trying to define a closure would also define a tree as 'part of a forest' rather than 'a big tall solid thing made out of wood and branches with leaves on'.
Fantastic origins talk! Turing gets a lot of buzz but it really does all go back to Church.
Very engaging presentation. I think I got about 1/2. But the history included was great.
If you want deeper understanding, watch the MIT SICP lectures from Sussman and Abelson. It's long, but even the first few hours are literally "Wizardry". The book itself is a treasure (Structure and Interpretation of Computer Programs; that's the blue book in the middle of those slides).
Related to the final part: David Beazley did the lambda implementation of logic and numbers etc. using Python in ua-cam.com/video/pkCLMl0e_0k/v-deo.html
As an engineer who occasionally writes C++ to help me do my work, I have one question: what are the actual benefits of lambda functions in getting something done?
fwiw, for me, it made passing functors for use with std::thread and co. easier (to read) than using things like std::bind. actually, it made passing functors to anything that needs them easier in general.
The most useful property of them is probably that you can parameterize algorithms. Some examples:
std::sort(myVec, [](auto a, auto b){ return a < b; });
Here the parameterization is of course uninteresting, since std::sort by default uses operator< anyway. But consider:
auto bigValues = filtered(myVec, [](const auto& value) { return value > 1000; });
But you can also further parameterize that:
auto someValues = filtered(myVec, [legalAge](const auto& value) { return value.age > legalAge; });
So here the benefit is that you don't need to implement arbitrary operators or write different filter functions. You can parameterize existing algorithms right there, without defining something out of band. This makes code shorter, simpler, easier to read and maybe even faster.
@@鲍凯文 Will look into that, if I ever have to use threads.
@@deepbluev7.x813 Thanks. I guess because I use C++ to do procedural data processing (probably incorrectly, as I use singletons to wrap up data and functionality into classes), I don't really use short functions, e.g. squaring a number. But I will look out for opportunities to use them as you suggest.
@@axelBr1 They tend to be sneaky little ninjas. Once you've used them a few times, you tend to find more and more places where you can use them. For example, to initialize a const or static multi-member struct.
answer = (condition < n) ? ifTrue : ifFalse;
29:45 I've been looking for this for a while, I knew I'd learnt about actors being equivalent to lambdas but forgotten where.
43:28 you CAN in fact name the type of a lambda expression in C++, just not in the traditional C-way.
i.e.: [] () -> float { return 1; } which returns a float with value 1. It even works with specific types where you implicitly call the constructor with an initializer list:
[] () -> MyType { return {1, 2, "foo"}; } would construct an instance of type MyType with parameters 1, 2 and "foo".
"Moderately insane" hahaha i love this guy
(at 54:59) There is no reason to have 'false' = \a b -> b . You can swap them and nothing changes as long as the rest of the definitions are consistent (e.g. 'false' = \a b -> a, 'true' = \a b -> b, 'and' = \f g -> f f g, 'or' = \f g -> f g f).
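A quick JavaScript check of that swapped convention (toBool is just a hypothetical helper for printing, not part of the encoding):
const T = a => b => b, F = a => b => a; // swapped: picking the second argument means true
const and = f => g => f(f)(g);
const or = f => g => f(g)(f);
const toBool = b => b(false)(true); // F picks its first argument, T its second
console.log(toBool(and(T)(T)), toBool(and(T)(F))); // true false
console.log(toBool(or(F)(T)), toBool(or(F)(F)));   // true false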
I suppose he knows… really, this “false = 0” explanation is a joke.
there is no reason for anything to mean anything
54:54 As before with the _2 name for "apply twice", the names "true" and "false" are misleading here. What these actually are is ifTrue and ifFalse, and they're called on a boolean value or expression.
(x < 4) ifTrue: [x++]
or in others:
y := (x > 4) ifTrue: [x * 2] ifFalse:[x + 7]
You wouldn't want to write
(x < 4) true: [x * 2]
That would be confusing af.
@@PaulPaulPaulson not sure I understand where that syntax is coming from. I think you can imagine that Church encoding defines data in a structure that is useful for future operations on them. In particular:
add = (a, b) => n => z => a(n)(b(n)(z))
mul = (a, b) => n => z => a(b(n))(z)
For booleans it is even easier to imagine since they explicitly represent branching:
nonzero = a => a(_ => f => t => t)(f => t => f) // from natural to boolean (first argument corresponds to false)
gt = (a, b) => nonzero(leftover(a, b))
Thus after providing values of both branches to boolean you collapse it to one or another branch:
y = x => gt(x, _4)(add(x, _7))(mul(x, _2))
Full example: tinyurl.com/qv9u6k4
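In case the link rots, here is a self-contained JavaScript version of the fragments above. Note that the numeral definitions and 'leftover' (truncated subtraction, built with the standard pairing trick for the predecessor) are my reconstruction, not necessarily what the linked example does:
const _0 = n => z => z;
const succ = a => n => z => n(a(n)(z));
const _1 = succ(_0), _2 = succ(_1), _4 = succ(succ(_2)), _7 = succ(succ(succ(_4)));
const add = (a, b) => n => z => a(n)(b(n)(z));
const mul = (a, b) => n => z => a(b(n))(z);
const nonzero = a => a(_ => f => t => t)(f => t => f); // first argument corresponds to false
const pair = (x, y) => s => s(x)(y);
const fst = p => p(x => _ => x), snd = p => p(_ => y => y);
const pred = a => fst(a(p => pair(snd(p), succ(snd(p))))(pair(_0, _0)));
const leftover = (a, b) => b(x => pred(x))(a); // truncated a - b
const gt = (a, b) => nonzero(leftover(a, b));
const toInt = a => a(x => x + 1)(0);
const y = x => gt(x, _4)(add(x, _7))(mul(x, _2));
console.log(toInt(y(succ(_4)))); // 5 > 4 holds, so the mul branch: 10
console.log(toInt(y(_2)));       // 2 > 4 fails, so the add branch: 9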
Functional evidence that we cannot successfully define anything without redefining it ad absurdum... carry on.
In fact, false being equal to zero really IS arbitrary and just a matter of convention and convenience. It depends on how you use it. You could switch the definitions of true and false, making true equivalent to zero, as long as all your if-then-else statements become if-else-then.
It can actually make sense if you consider that you may fail in many ways doing a thing, but there's only one way to get it right. (E.g. returning 0 by convention from a C/C++ main program if everything went OK, but a number unequal to 0 to indicate what failed.)
Thumbs up from a former Smalltalk programmer for mentioning Alan Kay
Common Lisp may be a monster, but it is a very cuddly monster!
Embrace Lisp, and it will embrace you!
I have been thinking of looking at some functional programming languages like Haskell, and from the descriptions and talks it always seemed to me that the language was doing things in a very similar way to some of the spreadsheets I've cooked up in Open Office. And then I see this at 14:15, and yup.
I once wrote a completely LEGIBLE shopping-basket application in Perl.
The presenter missed some other weird notes: Currying, if named by origin, should be called "Schönfinkeling". (See his paper "On the building blocks of mathematical logic" reprinted and translated in _From Frege to Gödel_.) Also, as for true and false - in other work in the foundations of mathematics from the 1920s and 1930s, true "was" 1 and false "was" 2. (In the work of Gödel, for example, if I remember correctly.)
in my formal semantics class in college, we called it "Schönfinkelization"
@@piraka_mistika Neat - did you use a text that had that "right"? If so, which one?
@@logiciananimal I don’t think we used a textbook
40:09 [](){}() Silence of the Lambdas! Brilliant!
Henney spends a minute talking about "Hiddenness" starting around 33:00 - can someone please explain what he's saying to me?
When you don't add names to types and variables, the maintenance programmer will kill you after they've had an all-nighter of Unnamed doing Unnamed on Unnamed and passing it to Unnamed. You should not look to the Smurfs as your source of reliability.
So much this. I recently had to debug an issue in a class that was a giant pile of nested anonymous classes and functions, and it was very difficult to follow, even with the debugger. When it works, it's kind of cool and magical, but when it doesn't... well... it's hard to find the problems.
14:10 Microsoft have since included lambdas in Excel
6:15 “[Functional programming] is so pure, that even if you unplug your computer, your functional programs will still run. Oh wait, that doesn't work. Yeah, they're riddled with side effects. All of this stuff is an illusion.”
Every single function takes some amount of processing time as an input before it can decide on the appropriate output, but I've never seen a useful way to acknowledge that. At least not in the way in which we're more and more often starting to do when composing functions with other sorts of expected effects. Even Haskell includes ‘never returning’ as a valid output of any type of function. Is it ever possible to statically assert that the runtime of an algorithm stays within a certain ‘big O notation’ with regards to (some of) its inputs, or is that what the halting problem says no to?
Interesting question. I'm sure what you're describing here can be reduced to the Halting Problem, since there's no way of knowing if the execution time is just dependent on an extremely large constant. O(n) and O(n^n) could take an equally long time on the given input if the constants for the O(n) solution are large enough.
But just like the Halting Problem itself this only applies when we look at things in a very general sense.
For example, determining that the function
int retX() {
return 1;
}
halts isn't really very difficult at all.
We can also very easily determine the time to execute this as being constant.
We have something called formal verification tools. You can have a look at Dafny. And these tools allow you to set up some information about your code, and the verification tool will prove whether the facts about the code hold or not. And as per the halting problem this cannot be done in general. But a lot of the time we can do it on specific code, by reducing it to SAT solving and other problems that are possible, even if they are hard.
Now I don't think these formal verification tools currently do big-O verification, but similar techniques could definitely be applied to verify a function's O time or space complexity.
I never get told about replies to UA-cam comments, if you'd like to discuss this further, you can email me
casperes1996@me.com
I am having an exam on Computability and Logic on Saturday so I'm quite in the weeds of all of this at the moment ;)
11:30 re parentheses, just to make things easier, like to see a compiler say "Yea, I understand that": you're describing Perl.
for the parenthesis thing, have a look at F#
Programming with emoji.... you're getting dangerously close to APL!
Nice talk ... yet ... what is the definition of "problem"?
Optional parens for function arguments... like to see a compiler deal with that..... hello Ruby.
So... in short, computers are the expression of lambda using technology.
(Today's) computers are the expression of lambda where X is a transistor powered *on.*
45:51 - "it's a big old mess" -- yeah, that's Java in a nutshell, right there. ;) ;)
edit: 46:04 - "there's a lot of duplication here" -- oh yeah, that too. :)
Where's the lie tho?
2:24 : excellent pronunciation Kevlin
Right, I couldn’t help but notice that too; it’s somewhat rare to get this kind of spot-on pronunciation of German words from English speakers.
His name is Kevlin though.....
Frank Steffahn corrected...
*sigh* Perl is only executable line noise if you choose to make it such. It's just the culture of people who think it's cool to golf everything, crossed with Perl's ability to actually be golfed down to tiny sizes, that has contributed to some of the most hideous examples of coding line noise. IMO, the extreme readability of Raku (née Perl 6) code is evidence of this. At its core you can write many things virtually identically, and most Raku code in the ecosystem (if you peruse it) is stunningly easy to read and follow, even without an IDE doing syntax highlighting.
I'm thinking of jumping into Raku. Your tiny review of it is making me think extra . . .
Silence of the Lambdas. XD I'm dead.
1:30 What about eigenvalues????
And probability distributions...
@@ingframin And Lebesgue measures?
Actually, eigenvalues and wavelengths from physics are one and the same. The problem of eigenvalues arose from the algebraic treatment of partial differential equations, most notably those of waves. The same later applied to Schrödinger's equation, which is equivalent to the Hamiltonian approach to QM. Again, lambda. And in its original form there, lambda was the energy of a state, which is directly and explicitly related to wavelength. That is also why we call the set of all eigenvalues a "spectrum". Just like light has a spectrum. One and the same.
44:10 The only reason it doesn't like that is that it doesn't know the type of x. Use explicit typing and the compiler will accept it.
Action nop = ()=>{};
nop();
His jokes are clever. But his timing as a comedian is so terribly off that they all fall flat.
Still a very interesting talk.
A lot of words about - what?
Thank you
Perl is executable line noise... Ha! I laughed.
In every presentation he has the same thing to say, "it was done before", and then goes on to show some 60s or 70s stuff to back his claim, even if it doesn't. Search YouTube for all his videos; you'll hear the same rhetoric!
Yeah Henney tends to recycle his material quite a lot.
Weird you say this on this talk though since this was different to most of his other ones. He didn't even mention the Singleton whiskey! - I love him though. Big fan.
So why did I decide to implement those lambda numbers in Kotlin again? Jesus christ what a mind fuck. Got it to work though I guess...
I'm now going to watch the cricket.
"It turns out" is a Kevlin Henney phrase.
In a talk about functional programming, he should really say "turning out is applied to it", instead
14:23 Excel now has lambdas.
Impressive af!
Commenting from the future where excel now has lambdas 😄
I thought he meant that in Excel you can just do =A1*A1 in a cell and get a result. That's a lambda since you're not actually naming a function elsewhere and then calling that in the cell
Too bad he didn't include Factor (the language) in his talk.
52:00 _2 is not a good name, because it's not the number; it's an "apply twice" and should be named something like apply2Times.
Only if you bind it to the increment function and a start value of 0 does it become the integer two.
_int2 = () => apply2Times(n => n + 1)(0)
This of course limits it to integers, but you can easily avoid that by adding an argument for the number type. The number type would be an interface which provides the zero element and the increment operation.
_2 = (numberType) => apply2Times(n => numberType.increment(n))(numberType.zeroElement)
_int2 = () => _2(int)
_double2 = () => _2(double)
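To make that concrete, here is a runnable JavaScript sketch of the same idea (intType and romanType are made-up number types, just for illustration):
const apply2Times = f => x => f(f(x));
const intType = { zeroElement: 0, increment: n => n + 1 };
const romanType = { zeroElement: "", increment: s => s + "I" };
const _2 = numberType => apply2Times(numberType.increment)(numberType.zeroElement);
console.log(_2(intType));   // 2
console.log(_2(romanType)); // "II"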
That's a "beauty". Church numerals "have no meaning", but only expression structure. Once you define terms you can collapse to "value" encoded in it.
as_int = a => a(x => x + 1)(0)
as_float = a => a(x => x + 1.0)(0.0)
as_bool = a => a(_ => true)(false) // 'add' becomes 'or', 'mul' becomes 'and', anything beside '_0' becomes 'true'
as_roman = a => a(x => x + "I")("")
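For instance, collapsing a hypothetical numeral _3 through those interpreters:
const _3 = f => x => f(f(f(x)));
console.log(as_int(_3));   // 3
console.log(as_roman(_3)); // "III"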
In pure lambda calculus, there are no numbers. There are only lambdas. So if you want to represent numbers in pure lambda calculus, you have to represent them by lambdas. Lambdas that are built from nothing but lambdas.
JavaScript isn't lambda-calculus, but as it has all ingredients that lambda calculus requires, the talk can use it to show how this would be done.
You're taking JS's built-in numbers too literal. (Pun intended.) In the talk, they're only used for illustration. Assume a JS variant that doesn't have them. You'd have to re-build everything from first principles, i.e. (here) from lambdas.* "apply twice" would be one of the simplest representations of the concept "two" that you'd be able to construct. If it's the closest thing to "two" that you can ever get, you might as well call it "2". Except that JS identifier naming rules don't allow a non-literal to be named "2", so you name it "_2".
*By the way, lambda calculus isn't the only thing from which you can pretend that "numbers don't exist, but let's (re-)invent them". You can for example also do this from the axioms of set theory. Set theory only cares about sets. It doesn't know about "numbers". But it turns out, you can construct sets of sets that perfectly represent numbers. For non-negative integers, you could do this:
0 := {}
1 := {{}}
2 := {{{}}}
3 := {{{{}}}}
⋮
or
0 := {}
1 := {{}}
2 := {{}, {}}
3 := {{}, {}, {}}
⋮
or
0 := {}
1 := {0} = {{}}
2 := {0, 1} = {{}, {{}}}
3 := {0, 1, 2} = {{}, {{}}, {{}, {{}}}}
⋮
Now obviously, {{{{}}}} ≠ {{}, {}, {}} ≠ {{}, {{}}, {{}, {{}}}}, so how can they all be 3? They're different ways to represent numbers using just sets, and they cannot be mixed. Just like "3", "III", "three", "drei", "trois", "tri" are different ways to represent that number. You have to pick one convention of representation / know which convention is used, or there will be confusion. Likewise, _2 as defined in the talk (the "apply twice" thingy) and JavaScript's built-in 2 are not the same thing, but both can be used to represent the more abstract concept of two-ness.
I don't think it means what you think it means.
I do not think it is defined as what you have been accustomed to define it as.
( Not trying to show off, this is a joke. )
ua-cam.com/video/dTRKCXC0JFg/v-deo.html
@@kavorkagames WAIT IT'S THERE TOO?!?
Lambda 3 => When()
Awesome talk, but yellow text on a pink background using the font papyrus is cursed lol
I fell asleep. Was great. What would that lambda look like?
Doesn't "schema" have 6 letters?
Schemer not schema. It followed the pattern of mapper and planner.
25:18 Wizard Book
Excel now has lambdas. Prepare your bunkers.
11.20 GROOVY
Lambda is the GNU GOTO.
So this is how Bret Victor's glorified typewriters came about....
Perl's line noise? Horse hockey! If you can't write readable Perlish code, you need a strong talking to.
I mean, this is clear as crystal.
E.g.,
for $bob ( 'fnord', 'slack' ) {
    $multiply = sub { (pop) * 2 } if $bob eq 'fnord';
    $multiply = sub { (pop) * 3 } if $bob eq 'slack';
    print $multiply->(21), "\n";
}
so... actor is just a lambda with a mutable closure? ... is the idea? ... (and synchronized)
...hmm.... I guess so...
ayyy, British humour, there is something posh about it..
Oh god functional programmers are the worst it's like maths nerds and computer science nerds had babies. I just tell them if you're not allowed side effects how do you ever do anything useful. Check mate.
See Haskell!
He lost me when he said use Powershell
I mean, it's as close to a decent command line as Windows had access to prior to WSL...
It's the most well-organized popular shell language. It has Microsoft's branding working against it, and it suffers from some verbosity perhaps (subjective, of course), but in itself it is a solid and creative re-imagining of what it takes to be a shell language.
i didn't understand shit :(
E=mc^2
F=ma
iħΨ̇=HΨ
Yeah, idk, my use of lambda comes in waves.