Agree, though I could be swayed to one of the newer versions for the built-in parallelism, since everyone's phone has 4 cores to throw at the problem. Lol.
That's exactly what I thought. The one with for loops is simple and easily understandable. Having said that, I'm an engineer who programs things, rather than a software developer, so I'm not particularly familiar with modern c++ features. Neither are the people who might try to read or maintain my code though, so for loops seem like the way to go for me. I don't really get the benefit of the newer features.
Don't for loops expose the technicalities way more? I mean, the only thing closer to the actual implementation and farther from the underlying idea would be explicit go-tos and conditional jumps… IMO, the Haskell version is the most readable because it explicitly states we want the maximum of sums. I.e.: maximum = maximum, of = ., sum = sum, -s (plural) = map. The C++ 'algorithms' versions are kind of halfway through. Like if someone inlined the definitions of maximum and sum by hand. But loops are one step of inlining farther. And they bring mutability to the table - which is probably the actual reason he dislikes them.
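To make that word-for-word mapping concrete, here is the Haskell one-liner with each English word labeled (a sketch; the function name matches the LeetCode problem):

-- maximum -> maximum
-- of      -> . (function composition)
-- sums    -> map sum
maximumWealth :: [[Int]] -> Int
maximumWealth = maximum . map sum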
I think the Haskell and APL solutions don't look that beautiful. They are cool, for sure. For me, beautiful means easy to understand with minimum LoC. The Haskell solution might be easier to understand, but APL is just crazy.
It's all fun and games until you have to debug 1000 lines of APL and it's just made out of characters. :) Though very informative and good video. Thanks
Now imagine having to debug the same program, but it is 100,000 lines of C++. And I’m not defending APL in this, I am honestly not sure which would be the greater pain in the ass.
Often your application / systems code will be lots of names (which you have created yourself) and be just as readable as any other "traditional" language. Just like any other language, you can write code that is unreadable for yourself or others, or you can write code which is perfectly readable and easily maintainable. The power of APL comes when you have to solve problems for yourself, and suddenly you find the relatively small collection of versatile primitives is quite satisfying to work with. On the topic of readability, this bit of conversation had some interesting points chat.stackexchange.com/transcript/message/55173403#55173403 - as an APL chat room, the topic of "why does every think APL is unreadable" comes up every now and again
In Clojure + is a variadic function and calls reduce internally for more than 2 arguments. So this is enough:
(->> data
     (map (partial apply +)) ; or equivalently (map #(apply + %))
     (reduce max))
A program is beautiful because it is beautiful, not because it is short. I'm fine with people playing code golf but not the entire world is a golf course.
Something I was playing with recently was writing sha256sum in C, C++, Rust, Python and other languages. I had no idea where to begin writing it in Haskell, so googled around and took a peek at what someone wrote earlier. For that, the Haskell ends up verbose and ugly compared to C and Rust.
What about safety? If the input is empty, would all of them crash? In Rust, because you used unwrap, it is clear. You could have returned a Result so the caller decides how to handle the error.
In APL, if the argument to this particular program was an "empty array" (an array whose dimensions were zero by zero, or 3 rows by zero columns, or zero rows by 3 columns) the program wouldn't crash; it would simply return you an array of dimension zero by zero.
In every language you can handle the error if you want to, using conditional flow, but you get undefined behaviour if the error is not handled. In Rust, handling the error is mandatory.
Damn, thanks a lot! I was just recently thinking maybe I could make a vector calculator with what I had learned, and combining this problem with what I had, I actually managed to make a pretty neat program that prints out a matrix. Next up is deciding if I should try to use these methods for the arithmetic. Noteworthy: it's #include <ranges> you need for the C++20 solution, despite the namespace names std::ranges:: and std::views::.
I know I'm late on this video, but the Scala solution could have been way shorter, actually you can declare it as a simple lambda expression :) val maximumWealth: Seq[Seq[Int]] => Int = _.map(_.sum).max
C++, using FunctionalPlus: auto maximumWealth = fplus::fwd::compose(fplus::fwd::transform(fplus::fwd::sum()), fplus::fwd::maximum()); will work on any container, and any numeric type.
Could you do a performance comparison with larger arrays, such as 1,000,000 x 1,000,000? How much memory is used? How large are the executables, etc. Thanks! Really interesting video.
My favorite was the Haskell one, then Clojure, then Rust. The Haskell solution is very elegant and easy to read; the Clojure one is a bit verbose, but I like how simple the language is. Rust is a good middle ground with a more procedural, low-level style. The APL is beautiful, but it's not easy to read: unless you are familiar with it, you wouldn't be able to figure it out. With Haskell, if you figure out that the top is the signature, you can get an idea just by reading it. People already have that problem with Haskell. I wish math wasn't such a boogeyman in education.
(N.B. This comment is about the ADSP podcast; didn't see anywhere to leave a comment on an episode on the podcast page) Conor, my ears perked up when you said any scan is parallelizable. The formal definition of plus-scan over a series of numbers like iota 4, equal to 1 2 3 4, is a series of reductions: +\ iota 4 is (+/1),(+/1 2),(+/1 2 3),(+/1 2 3 4). Because plus is both commutative and associative you can parallelize the +/ first, and then run the +\ as four +/ on four separate cores, each of which is running the parallel plus-reduce algorithm. But that only works for things that are both associative and commutative like plus or multiply; it wouldn't work for things like divide and minus scan (you could, of course, run each of the four ÷/ required for a "÷\ iota 4" on a separate core, but I don't see how you could parallelize the ÷/ itself). But of course, being an APL guy, I let the interpreter do all the smart optimization, so I very much look forward to gaining some insight from your upcoming podcast episode discussing parallelizing scans. T.C.
If you happen to see this, any particular reason you don't like the inside-outness of the C++20 solution while being fine with it in the Haskell and APL solutions? If I followed correctly, all three are essentially max(sum(matrix)). I totally agree about the Haskell and APL solutions being far nicer. Was just curious if you'd happen to consider that or if it was a gut thing.
Just like you, in my day-to-day work, I avoid for loops, while loops, and do while loops like the plague if possible. I'm a JS developer by day, but I'm learning Haskell on the side. The more I work with Haskell, the more I love it.
Question: Are the later versions of the C++ solutions that are parallelized actually faster? The original solution is easily optimized by a compiler. Is the parallelization basically threads? Because I really think some AVX/SIMD and some for-loop unrolling will do a better job speeding things up vs the overhead of dealing with threads for such a simple problem. If I've learned anything from pushing performance in C, it's: don't obfuscate what's actually happening; select which optimization strategy you actually want for the problem, starting with very vanilla code; don't get in the way of an optimizing compiler; and know your compiler flags, because what people think is optimized (-O3) is nothing compared to what you actually get for free by throwing in a few more flags (as long as you let those optimizations work by not handing the compiler code that's already been optimized in the wrong direction). Because of that, any time someone says "this code pattern is always faster... because this feature", I can't help but say yeah, right. I'm sure it's really fast. For example, most of the time when people use a high-level utility to speed something up you get at best 4x speed minus overhead... hopefully. I've seen that letting fully flagged optimizing compilers do their best can speed things up tenfold on the regular. If you have really modern hardware that's only going to increase, as there are more features to leverage. That's why not getting in the compiler's way is rule one for fast-ish code at ease.
Cool. Now do an http request, and find all instances of /get_file/ between single quotes and rank the results by an integer found within the string. Then try to convince people that the haskell that results from that is beautiful. Javascript, scala, go, and rust will do the best in that order. Then clojure, c++, haskell. I don't know how to do that in APL. It would be very cool if it stays elegant. Then I will have to agree APL is cool if it can do that. Not having 50 bajillion text types might help.
People are bothered by the smallest things lol. Some will prefer to not have to memorize a new symbol and instead use a very short word like min/max, but definitely to each his own. It IS interesting to have a symbol for those.
I will submit a Groovy one-liner that strikes a nice balance of readability and elegance: def maximumWealth(def accounts) { accounts.collect {it.sum()}.max() }
In my opinion, very few of these solutions are actually explicit in what they are doing. If you did not have background knowledge in these languages, you would see these solutions like hieroglyphics. I understand that this video is meant to show exaggerated differences between the languages, but in my opinion good code explains itself. This is not a complicated operation, so it should be clear what is happening for it to be truly "elegant" in my eyes.
That really is the key question here! And that's what puts rust lightyears ahead all others in this comparison. It forces you to consider that edge case.
@@danielsimon424 I'm not sure what you mean by that. You could imagine that max would work along with a typeclass Ord' that also defines a minimum element, I guess. In this case max [] (if [] has been inferred as an integer list) would return MIN_INT. I'm not sure most people would find that natural. Really, there are only two sensible choices : either give it a type max :: [a] -> Maybe a (which is the same thing as in Rust) or just throw an exception. It's a matter of taste.
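The Maybe-returning choice described above, as a minimal Haskell sketch (maxMay is a hypothetical name):

-- Total version of max over a list: no exception, no MIN_INT sentinel.
maxMay :: Ord a => [a] -> Maybe a
maxMay [] = Nothing
maxMay xs = Just (maximum xs)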
Agree that reduce is the right abstraction, even though there are dozens of possible solutions. Another Clojure solution: (defn maximum-wealth [accounts] (reduce max (map (partial reduce +) accounts))) The threading macro is nice but not really needed here, usually reserved for simple function application where each successive return value fits into a simple additional function call. 'partial' always seems cleaner than anonymous functions.
// The JavaScript solution isn't far off from Scala: const maximumWealth = accounts => ( Math.max(...accounts.map(sum)) ) // Although you do have to BYO sum function: const sum = _ => _.reduce((a, b) => a + b)
Here is another Clojure solution using no anonymous function and no loss of argument visibility from the arrow macro. (defn max-account [accounts] (reduce max (for [i accounts] (reduce + i))))
I don't get why you dislike the inside-outness of one solution, but are fine with the right-to-leftness of others. We're talking about functions; they're the same thing -- one just has parentheses.
My only thought is that the Haskell is pretty much the same as the Python: max(map(sum, accounts)). But, the python is more readable because the function application is extremely explicit and it's not point free.
What is the fixation with threading macros in Clojure ? I also find the anonymous function shorthand ugly - I much prefer: (reduce max (map (partial reduce +) accounts))
Alternate: (defn max-wealth [accounts] (reduce max (map (partial apply +) accounts))) Video went out of it's way implying Clojure did not have a 'sum' function (e.g. a function that can add more than two values together). Of course Clojure's '+' function does exactly that already (+ 1 2 3 4 5)
Once you get used to them, the threading macros are second nature to use and to read as well, and they make for concise clear code. In the beginning I had to stop and analyze code that was using them for a few minutes before it really made sense, but its something you can get accustomed too pretty quickly.
@@okcomputr I agree that they have their uses - but in many cases I find them just adding unnecessary complexity - e.g. the example in this video - why, when we have an established and simple syntax, choose to use a more complex one and turn a two-liner into a three-liner? I have often seen this done - particularly in a video that should be showcasing the simplicity, brevity and beauty of Clojure/LISP rather than its ability to toss in arbitrary transformational macros. Why, when you have finally wrapped your head around an "inside-out" Polish notation, would you then use it to write a macro that lets you scatter around random pieces of "outside-in" code with more complex rules for parentheses than the original? Why, when you have mastered the concepts of recursion and nesting, would you want a macro that hides all of this? It is like trying to read a novel where every few lines you come across some cryptic symbol notifying you that you should stop reading left->right and start reading right->left - a jarring paradigm shift - ...and all because the author has seemingly not been able to wrap his head as far around the LISP paradigm as you have, and would rather write procedural-looking code...or is patronising you by assuming the reverse... end rant :-)
@@julesgosnell9791 while I agree that "less is more" generally speaking, my eyes (trained by years and years of reading and writing Clojure professionally) tend to fare better parsing a threading macro form rather than nested forms in most cases with more than two function calls, mostly because such macros do the job for me of letting the form boundaries "pop off". And if I eventually use nested forms I tend to use newlines / indentation to improve the EDN parsing speed of human eyes (or maybe just my eyes, no need to generalize). All in all it is, I believe, personal preference and possibly more of a style choice (apropos: github.com/bbatsov/clojure-style-guide#threading-macros), but as a bit of a tongue-in-cheek counter argument: why, when you mastered homoiconic languages and macros, mixing thread-first and thread-last together to fit LISP forms into each other like puzzle pieces, would you ever consider the more mundane nested forms more lispy? :-)
It's become abused culturally. Along with the map-itus. They forget to abstract arrows into selectors and end up churning out reams of arrow macros doing the same thing over and over again. I consider them syntactic saccharine since they actually cause things in the body to be missing one parameter. Do they have a place? Sure in small isolated pockets, when half of your code is the same arrow macro pasted over and over it's hot garbage.
As the Cardano cryptocurrency uses Plutus, a dialect of Haskell, and many modules of the nodes and overall system are written in Haskell, DApp development on Cardano is a pretty good opportunity for Haskell programmers. PS 1.0: If you use a for loop inside another (quadratic time, ouch!) you'd better have proof of a theorem stating that there's no other solution to the problem! PS 2.0: Your mic is clipping a bit in this video, please watch the gain for the other videos!
I don’t find Haskell intuitive. The C++11 solution is my favorite, with two for loops. More terse and encapsulated is not necessarily better.
For very simple problems like these, nested for-loops add a lot of extra boilerplate. Finding the maximum of a container, or summing it up, are such common operations that there is no reason not to use the built-in, single-line functions. The intent is crystal-clear -- unlike a for-loop, where stuff can happen that has nothing to do with the task at hand. However, I am also not a huge fan of trying to twist and abuse problems to fit a certain "style" of coding, if another approach results in more readable code. If I'm writing lambda expressions inside lambda expressions, chances are whoever has to read that code will need much more time to correctly understand what the code is written to accomplish, and what side-effects it may have. In these situations, for- or while-loops (even nested:) may be more readable. Or at least that's what I think whenever I encounter code written by somebody who tried to be extra cute that day and decided to do the complex thing in "one line".
I love the way you talk about the "beauty" of a solution. It's that sort of thing that reminds me why I love programming. Thank you :)
I prefer the haskell solution to the APL one, because in haskell everyone can get a clue of what is going on whereas with APL you have no clue what you are looking at.
@@carlislenightingale8853 It does require knowing what two symbols mean. But the Haskell is in a similar boat, because very few people reading it would guess what the composition operator is actually doing. For people who don't know, it essentially modifies the order of operations.
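For anyone unfamiliar, a minimal sketch of what the composition operator does there — the right-hand function runs first:

-- (f . g) x = f (g x), so reading right to left: map sum, then maximum.
maximumWealth :: [[Int]] -> Int
maximumWealth = maximum . map sum

-- the same thing written without composition:
maximumWealth' :: [[Int]] -> Int
maximumWealth' accounts = maximum (map sum accounts)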
JavaScript would be a=b=>Math.max(...b.map(r=>r.reduce((c,d)=>c+d,0))) which I think is a great balance between simplicity and readability
I use C++ for high-performance code, Python for interfacing, and Haskell to keep my ego in check
@@TheRustyCrab what is your problem
this
Look at him, bragging about his checked ego!
I use Rust for all of the above
(I'm being held at gunpoint)
As a guy who inherited application support for a 500,000+ line C code base and a couple of 2-3000 line APL applications, I can tell you I'd much rather see the word MAX written out than have to look up every single glyph I stumble upon at 3:30am on a Saturday morning.
Thankfully I've moved on from that job, but the main issue was always taking down one of those special keyboards with 4-5 symbols printed on every key.
I agree. I found APL to be "write only". I couldn't even read and understand my own code after a few days.
Could someone enlighten me on some history of APL?
It looks so much like an esoteric language, so why is it in actual industry code? Do people just love it that extraordinarily? Does it have some unique killer-feature, or what?
@@mskiptr
"APL since 1978":
dl.acm.org/doi/abs/10.1145/3386319
Also yeah, we love it.
People actually wrote applications in APL? That too 3000 line applications? Wtf
The APL solution seems to be the most straightforward, if I can figure out where the max symbol is on my keyboard 😁.
@@alhabshi3k I've been playing with both and prefer the old character set. It was developed by a Harvard professor who was frustrated with the inconsistencies and practical difficulties of conventional mathematical notation. He won the Turing Award for this work. The glyphs are mostly mnemonic and are easier to read than J once you get used to them.
Which do you prefer:
two plus two equals four
2+2=4
In the best APL version there's an IDE with keyboard shortcuts and point and click buttons. You quickly learn the most common glyphs, and you are writing so much less code that it's really a non-issue. If you get hooked, you can buy a dedicated keyboard for £100.
@@alhabshi3k ⌈ is Ctrl+s on the Dyalog keyboard. Googling 'APL keyboard layout' gives some nice diagrams
On my Portuguese keyboard, a curly brace ({ or }) requires pressing 3 keys. Think about how often I need to type that in C++, Rust, Kotlin, Swift, etc. This is one of the reasons that I am now taking a hard look at Haskell or any other ML.
In the Scala solution, it is recommended to skip the curly braces. What you are doing here is creating a block. Blocks return their last expression. This block only has one line and is thus unnecessary.
In the Haskell solution you could just do `foldl1' (+)` and `foldl1' max` to make the reductions explicit
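As a runnable sketch of that suggestion (foldl1' comes from Data.List; like maximum, it is still partial on empty lists):

import Data.List (foldl1')

-- Both reductions written as explicit strict left folds.
maximumWealth :: [[Int]] -> Int
maximumWealth = foldl1' max . map (foldl1' (+))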
If you don't mind spicing the type signature up you can do this in Rust:
fn maximum_wealth<T>(v: T) -> Option<i32>
where
    T: Iterator,
    T::Item: Iterator<Item = i32>,
{
    v.map(Iterator::sum).max()
}
The code in this one looks a bit nicer, but the data types are just crazy and you have to pass the vectors in as iterators first.
Also notice that by returning an Option you can get rid of .unwrap(). This also has the benefit of returning None when your outer Vec is empty, instead of throwing an exception like other languages would
Would be interesting to see benchmark between them. Especially the c++ versions and rust (I work as a c++ developer as well).
All the shine would be washed out, as one realizes how much is paid for them.
@@MrAbrazildo I don't know enough about C++ templates (I think std::accumulate, std::max, etc. use templates) to say if the compiler will properly optimize them. But one thing's for sure: the good old for loops will obliterate all the other solutions.
I liked the very first one. Very imperative, very procedural. Easy to follow.
The Haskell one was nice too, clean and probably as readable for experienced haskell/FP programmers. But the "improved" C++ ones...
The composition operator is just a function, so that makes 4 functions in the Haskell solution.
Python: return max(map(sum, accounts))
Nice, that is nicer than mine: github.com/codereport/LeetCode/blob/master/0217_Problem_1.py
@@code_report I like comprehension more than map, cos it's more explicit and readable.
@@henrywang6931 I definitely could be wrong, but I think the expression in code_report's solution is a generator rather than a comprehension list.
@@AM-qx3bq yeah the sum can take any iterable so generator would work too. A generator can be declared with a comprehension.
i'm currently learning QNial, fun language, APL's cousin (github.com/danlm/QNial7)
QNial : max byrows sum accounts
For me readability is king; only when performance is an issue would I go with more technical solutions. When working with a team, this kind of fancy code trick makes everyone's life a nightmare when debugging. Although it is nice to see the possibilities.
What do you think is unreadable here? Maybe the functional C++ solutions, because of a gazillion namespaces, but otherwise everything is pretty clear. Yes, even APL (in this case at least)
I find the python solution for this pretty nice:
def maximumWealth(mtx):
return max(sum(x) for x in mtx)
For me readability is king; Haskell and Rust are what I like the most.
For some languages I suppose the program just crashes if the array is empty
I've been around a long time, learned COBOL on a Burroughs mainframe in the 80s and do cloud architecture and security now. I knew about APL, Haskell, Scala and Clojure but had never seen a solved problem before. All you hear about these days is JS and Python and maybe Go. Thanks for the video, you presented and explained the material very well. It clicked with me vs. a lot of the other coding channels. New subscriber.
How would the task be solved in COBOL?
@@user-tk2jy8xr8b It would be the most verbose solution you've ever seen
Cleaner Clojure solutions are available using transducers.
(transduce
  (map (partial apply +))
  max
  Long/MIN_VALUE ; max has no zero-arity, so transduce needs an explicit init
  input)
here are two one-liner solutions for clojure:
1. (reduce #(max %1 (apply + %2)) 0 accounts)
2. (apply max (map (partial reduce +) accounts))
I'm learning ocaml and here's the solution I would use (with the Batteries library):
let maximumWealth = Array.(map sum %> max)
Without the Batteries library you don't get sum or max, but they're easy enough to define in terms of folds:
let maximumWealth m =
let max arr = Array.fold_left max arr.(0) arr in
let sum = Array.fold_left ( + ) 0 in
m |> Array.map sum |> max
more elegant clj:
(->> data
(map (partial apply +))
(apply max))
The Scala solution is the most readable and elegant at the same time. It reads like pseudocode.
accounts.map(_.sum).max
Even if you don't know Scala you can read this code and understand what it does.
@10:40 that's because you hardly EVER just take the max of two elements in mathematics. Usually you want a max over some array or set, so you always need to feed ⌈ into some combinator.
In other words, because we don't teach combinators in schools, a single-glyph for the "max" binary operation simply isn't very useful.
In much the same way, mathematics notation doesn't actually use the symbol "+" for the equivalent of APL's `+\`, instead they use something like an upper case Sigma with explicit range bounds etc.
I would agree that it would be very nice to teach combinators in middle school, and even teach kids with stuff like Haskell or APL.
I watch these so I can get confused, then I can get my brain adjusted to how much I don't know. Love the content.
Very cool stuff! But I have to disagree with your ranking here. IMO, Rust is CLEARLY the winner, just because it's the only one that forces you to explicitly state what you want to happen if the sequence is empty. In practice, that's a huge achievement. And although it's doing that, the implementation is still very clean and readable (and also easily parallelizable).
Exactly! Just add rayon, use par_iter instead of iter and you basically get safe threading for free!
In Haskell, all you'd have to do is change the type to Data.List.NonEmpty, and that corner case wouldn't exist (relocated to input validation where it belongs). It's also the difference between Monoid (e.g. Sum) and Semigroup (e.g. Max). And yes, they're easily parallelizable (start with Control.Parallel.Strategies).
Similarly, the complaint that the reduction wasn't exposed as the same operation could be trivially dealt with using foldl1 or sconcat. Partial functions including head and tail are still controversial, and it's possible to hide them (e.g. use classy prelude). Still, having exceptions beats silently regarding asm as safe imho.
Perhaps the most infamous partial function is divide. Does rustc also enforce nonzero divisors?
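A minimal sketch of the NonEmpty approach (using base's Data.List.NonEmpty):

import Data.List.NonEmpty (NonEmpty)

-- maximum and sum can never hit the empty case here: the type guarantees
-- at least one row and one account, pushing validation out to the caller.
maximumWealth :: NonEmpty (NonEmpty Int) -> Int
maximumWealth = maximum . fmap sum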
in c++ with funlib this is just
const auto wealth = stream(accounts) | map( fold( add, 0)) | max();
Haskell version in which "maximum" is last because it happens last:
map sum >>> maximum
in which each sum is computed in parallel:
maximum . parMap rseq sum
in which the two reductions are explicit:
ala Max foldMap . fmap (ala Sum foldMap)
Doesn't ghc have a flag to parallelize automatically? As a pure functional language it shouldn't need to be requested in code.
Maximum doesn't happen last, it happens progressively. Maximum begins and, upon requiring the sum of the first list element, a sum of one list begins, and so forth. Depending on the selected numerical type, the sums might progress together too, though I couldn't give you an implementation of that type and its Num instance off the top of my head.
@@noomade I do, check out my YouTube channel by clicking on my avatar!
@@tricky778 It's theoretically possible, but in practice parallelizing everything makes programs slower, not faster, because it introduces some overhead. That's why annotations are currently needed. In theory a sufficiently smart compiler might be able to only parallelize where it improves performance, but that's much harder!
@@haskell_cat check out HVM, a massively parallelized runtime for pure functional languages
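For reference, the annotated parallel version above is runnable as-is with the parallel package (build with -threaded, run with +RTS -N):

import Control.Parallel.Strategies (parMap, rseq)

-- Each row's sum is sparked in parallel; maximum then reduces the results.
maximumWealth :: [[Int]] -> Int
maximumWealth = maximum . parMap rseq sum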
A for-for loop and left-folding are single-pass algorithms. Mapping and then searching for the maximum needs two passes.
2d array to number folding in clojure: (reduce #(max %1 (apply + %2)) Long/MIN_VALUE accounts)
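For comparison, the same single-pass shape in Haskell — one strict left fold carrying the running maximum, assuming at least one row:

import Data.List (foldl')

-- One traversal over the rows: sum each row, fold it into the running max.
maximumWealth :: [[Int]] -> Int
maximumWealth (row : rows) = foldl' (\acc r -> max acc (sum r)) (sum row) rows
maximumWealth []           = error "maximumWealth: empty matrix"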
Second reduce not necessary. Clojure's max function takes variable number of parameters.
Can use apply instead
Oh, and why apply over reduce? stackoverflow.com/questions/3153396/clojure-reduce-vs-apply#:~:text=The%20beauty%20of%20apply%20is,work%20with%20variable%20args%20case.
@@bediakogeorge4488 nice quotes. I don't like java-world constant in my solution. You help me to avoid it at all.
1. I want to illustrate why fold is more preferred than mapping and searching => (reduce ,,,)
2. Reduce => I expect to see custom (fn [acc el] ,,,) and maybe a big transient accumulator
3. When I already have a function in my left hand and a collection in the right hand: cons-this-f-to-list-and-eval ~ apply
------------------------
4. You ask why I prefer one function over other in case they both work. Because of link, I assume you know classic answer "it's not about speed, it's about the way of thinking".
Speed:
- dropping intermediate results is anyway more important
- usually you don't care if they have a little difference in speed because faster collections = faster computing (I bet matrix libs or java primitive arrays with areduce are faster)
- unchecked-math magic can change everything
Comfort for the person writing both functions in one top-form:
Reduce is like a contract: "I'll convert a collection to a result". Do it. Why do you need a second reduce if you have only one collection?
@@LionKing-qp1lk I didn't understand: You help me to avoid it at all. -- are you saying there's a refinement possible to remove the Long/MIN_VALUE ?
The Rust one is very easy to read! Awesome videos
As someone who doesn't know APL or is new to the language, the APL solution is horrendous. Not to say the C++ solutions are ideal, but the for-loop one would be very obvious to anyone even reasonably familiar with programming. (The Rust solution also.)
I think the Haskell solution is probably the cleanest, as someone who has never even programmed before might be able to figure out what it does somewhat.
But even if you were Dennis Ritchie himself you wouldn't have the slightest clue what the hell the APL solution is doing without already being very familiar with the language. Fewer lines of code doesn't always mean better code.
The question is ill-formed because what happens if the array is empty? Rust makes this problem apparent because you called unwrap. It also showed why I think algorithms in C++ is not entirely useful because it requires so much ritual to get anything done that you question why you did it in the first place. Without pipes, the control flow is not entirely obvious. Admittedly, some of these are done for performance reason, namely, entering the begin and end pointer makes sense in low level terms, but not entirely natural in the way piping are done since something like this is usually achieved by performing a filter instead.
In APL, if the argument to this particular program was an "empty array" (an array whose dimensions were zero by zero, or 3 rows by zero columns, or zero rows by 3 columns) the program wouldn't crash; it would simply return you an array of dimension zero by zero.
@@tcnymex that sounds... useless, and at best like an honest mistake. The max operation simply does not have an identity, so it's literally impossible to call max on an empty array. I have spent many nights pondering this problem, but I am simply too stupid to come up with anything clever beyond returning Option.
@@cataclysmwarshulduar One can count negative infinity as an identity for max. But adding a negative infinity to your numbers is pretty much like adding a None value with Option, so yeah..
the APL solution looked like a trick, similar to those you can do in C, and everyone tells you not to do them, but you still like them and do them. I agree with you that there should be single-char operators for min/max, as they are used so often. The single-char reduce operator looks like the division operator, and this is confusing if you don't know APL, which I don't, but I get it. Nice.
You can write the Haskell solution in Clojure!
(def maximum-wealth
(comp (partial apply max) (partial map (partial apply +))))
"apply +" is needed in Clojure to use + in vector form
"apply max" is needed to use max in vector form
"partial" is needed as Clojure does not curry functions
"comp" is the Clojure way to do function composition
Having said that I believe your Clojure solution is more idiomatic and the Haskell solution is cleaner than either Clojure solution!
Fortran: maxval(sum(x,dim=1))
full program:
integer :: x(3,3) = reshape([3, 7, 2, 8, 10, 2, 3, 3, 4],[3,3])
print*, maxval(sum(x,dim=1))
end
My Solution in Rust would be
accounts.into_iter().map(IntoIterator::into_iter).map(Iterator::sum).fold(0, i32::max)
yes, we could use the advantage of .max(), but it returns an Option and I don't really like unwraps; you could do unwrap_or(0) but that's one more call.
fold is just as readable in my opinion and lets you directly place a default
using C# LINQ (assuming input is the given matrix):
input.Select(acc => acc.Sum()).Max();
or alternatively
input.Max(acc => acc.Sum());
another solution for Clojure
(apply max (map (partial apply +) accounts)) ; note: (apply map + accounts) would sum columns, not each account row
The most beautiful & readable solution in my opinion is the Rust one.
I really like the Rust as well 👍
Rust is the best
@@noomade I find the haskell one more readable, but the Rust one more efficient and fully safe, as the possibility of an empty list must be handled.
I never used Haskell, only Idris, but I'm sure there's a type for non-empty lists, which will force the caller to deal with the wrong input rather than the callee.
It's better imo, because having to handle the error at each step can be very repetitive.
I'd go as far as saying that for the procedural solution, an assert(size) would be better. After all, we do not expect an empty list to be passed to this function.
If somehow an empty list is passed, it's not our (the function's) responsibility to check it. It's the responsibility of the part of the program that calls the function in the first place.
In JavaScript you could do a=b=>Math.max(...b.map(r=>r.reduce((c,d)=>c+d,0))) which I think is a great balance between simplicity and readability.
You map each row of b to its sum: reduce takes two arguments, a function and an initial value; c is our running sum/initial value and d is each element of the row, added to c to become the new c on the next iteration. Then you spread the row sums into Math.max with ... .
(For me) Haskell is the most elegant language. APL just has massive readability issues (for me?). The outlined good point about APL (i.e. the explicit 2 reduces) might be bad advice when considered from the perspective of other langs.
reduce is just too general, and its existence in code is problematic (from a readability standpoint) IMO; I always prefer to use something more restrictive. Let me illustrate that with an example: here is the solution using just reduces.
maximumWealth = foldr1 max . foldr (\x xs -> (foldr (+) 0 x) : xs) []
Everything here is a reduce (foldr is Haskell's version of reduce).
i.e. maximum is a reduce (maximum = foldr1 max) (foldr1 is just like foldr but uses the last element of the list as base value)
summation is a reduce (sum = foldr (+) 0)
mapping is a reduce (map f = foldr (\x xs -> f x : xs) [])
put another way, when you encounter reduce, you first have to figure out what it is doing, and only then do you understand the solution (which is the same problem as with a for-loop). That is why, as a general rule, I don't like to use reduce.
I say the solution, where there is no reduce, is far simpler to read - maximumWealth = maximum . map sum
It is worth noting that maximum has to be a partial function (throw an exception or return something like `null` for an empty list). Some people (including me) consider using it bad code. In Haskell, you could use total alternatives, e.g. the safe package:
maximumMay . map sum
.. or use non-empty list type
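As a complete sketch, the total version with the safe package returns Nothing rather than throwing on an empty outer list:

import Safe (maximumMay)

maximumWealth :: [[Int]] -> Maybe Int
maximumWealth = maximumMay . map sum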
It's interesting that the OO languages have effectively given up on their claims that OO is the solution to issues of reliability and complexity. Almost all the new features in C++/Java/C# are watered down functional ideas. So why not just use a proper functional language like Haskell or F#?
Performance 😢
I think because of, first, performance and memory: functional programming tends to use a lot of recursion and immutability.
And second, because FP and PP each have their strengths depending on the problem: for the problem in the video, FP would be the way to go. It's really well adapted and readable. The Haskell one was the cleanest by far.
Pure procedural — basically the first C++, which was kind of a C solution — was clean and easy to follow too. And imo, easier to modify if the problem changed a bit.
But for some problems, like the ones that need you to look at neighbors in an array, PP would usually be cleaner. I'm eager to know if a better way exists in FP, but the +1 is the best by far in my (limited) experience.
I don't see the point of (or difference between) the Haskell and APL solutions. They're the same, but someone decided to rename max, maximum, map with weird characters. By units of code, both are 4 long.
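On the neighbors point: the usual FP idiom is to zip a list with its own tail so each element is paired with its successor — a Haskell sketch (pairwiseDiffs is a made-up example):

-- Pair each element with its right neighbor, e.g. successive differences:
-- pairwiseDiffs [3,7,2,8] == [4,-5,6]
pairwiseDiffs :: [Int] -> [Int]
pairwiseDiffs xs = zipWith (-) (drop 1 xs) xs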
because modifying state will always be faster than copying it
a solution to the same problem in a language that I created:
array vec = [
Array[4,5,6],
Array[6,7,8],
Array[88,88,0],
]
loop len[{vec}]: uelem vec:$$I = sum[vec..$$I]
arr richesIndex = indicesof[{vec},max[{vec}]]
print [richesIndex]
I love videos like this! I think Java would be a worthwhile addition to these...especially if you're going to do 3 different versions of C++ lol. Java streams result in a solution that is _very_ similar to Rust, but possibly more readable (assuming YouTube doesn't destroy the formatting lol):
int max = Stream.of(accounts)
.mapToInt(a -> IntStream.of(a).sum())
.max()
.getAsInt();
In my opinion, in the second line it's more concise and readable to do .mapToInt(i -> i).sum() instead of invoking the IntStream method. Although you do need to know why the mapToInt(i -> i) is necessary: to convert a Stream<Integer> to an IntStream by automatic unboxing.
@@martinlukas6097 I had to go back and think this through again haha. My example is expecting "accounts" to be of type int[][] (i.e. an array of int arrays). So, the IntStream.of(a) call is taking an array of ints, converting it to an IntStream, and then summing them. I'm not sure where calling mapToInt(i -> i) could fit in there and have it still compile. mapToInt() expects you to map each item in the stream to a single integer, so calling sum() on it won't work (sum() is a reduction that only works for IntStream).
I suppose you technically could make the Rust solution point-free if you use the function form of the .iter method if you're ok with an extra map:
accounts.iter().map(Vec::iter).map(Sum::sum).max().unwrap()
My Rust is a bit rusty though, so I'm not sure if this'll work
"My Rust is a bit rusty" 🙂
I know it's not intended but it's a nice pun nevertheless.
That's not point-free though except in the lambdas, you're still naming accounts explicitly, aren't you?
Excellent! My father worked at IBM at same facility as Ken Iverson, inventor of APL. My father noted how Ken had his own printer, which in the 1970s was bigger than a fridge and cost a fortune. Why? “Because he invents things.”
Now, can you please explain monads?
They're just monoi... Sorry
Monad is just a thing you can use a flatMap with, that's all :D Think it through. Sometimes it's called bind or chain, or it's hidden inside a language feature like async/await in JS, but it's still the same.
Ugh, the "monad" drama is so boring. Oops, the m-word drama, considering the monad word is forbidden in some "circles" 😄
@@witoldsz80 frankly a statement in c++ is an element of a monad isn't it? A function call is flatmap, semicolons between statements are the monoid operator.
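To make the flatMap framing concrete in C++ terms, since C++ came up: C++23 gave std::optional an and_then member, which is precisely the bind/flatMap being described. A small sketch — parseInt and halveIfEven are made-up helpers, not anything from the video:
#include <charconv>
#include <iostream>
#include <optional>
#include <string>
// Hypothetical helper: parse a string to int, empty optional on failure.
std::optional<int> parseInt(const std::string& s) {
    int v{};
    auto [p, ec] = std::from_chars(s.data(), s.data() + s.size(), v);
    if (ec != std::errc{} || p != s.data() + s.size()) return std::nullopt;
    return v;
}
// Hypothetical helper: only even numbers can be halved "successfully".
std::optional<int> halveIfEven(int n) {
    if (n % 2 != 0) return std::nullopt;
    return n / 2;
}
int main() {
    // and_then is optional's flatMap/bind: it chains steps that may each
    // fail, short-circuiting on the first empty result.
    auto a = parseInt("42").and_then(halveIfEven);   // contains 21
    auto b = parseInt("nope").and_then(halveIfEven); // empty: parse failed
    std::cout << a.value_or(-1) << ' ' << b.value_or(-1) << '\n'; // 21 -1
}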
Wow, haskell and apl are so beautiful. Also, now I understand why some coders loved haskell during advent of code
I don't like that APL one at all. Seems fine in isolation, but I reckon it would have the Perl problem of a script being a cluster fuck when you come back to it years later
In this example the problem fits the language (APL) perfectly. In everyday programming you will rarely find a mathematical problem like this. What about a web server connected to a MySQL db?
I am by no means an expert, but AFAIK the Dyalog implementation of APL has .NET interoperability. Also, a database table is more or less a matrix to APL, so you could actually use its strong array capabilities for processing databases.
95% of the PHP code I write for boring REST CRUD apps is based on collections and calling .map(), .reduce() and .filter() on them. This is more everyday stuff than most other problems.
We can get a functional definition with JavaScript too! With:
const maxWealth = compose(max, map(sum));
We just need these simple helper functions:
const map = fn => list => list.map(fn);
const sum = list => list.reduce((acc, val) => acc + val, 0);
const max = list => Math.max(...list);
const compose = (...fns) => arg => fns.reduceRight((value, fn) => fn(value), arg);
From all C++ versions I'd choose that with two for loops. It's very readable. C++17 is the worst IMO. Too many technicalities exposed.
yea, he has no clue what he's talking about. "i love algorithms". he's gonna have a heart attack when he discovers all algorithms are made out of compare and conditional jumps :D
Agree, though I could be swayed to one of the newer versions for the built-in parallelism, since everyone's phone has 4 cores to throw at the problem. Lol.
@@hansiraber why is that a problem?
That's exactly what I thought. The one with for loops is simple and easily understandable. Having said that, I'm an engineer who programs things, rather than a software developer, so I'm not particularly familiar with modern c++ features. Neither are the people who might try to read or maintain my code though, so for loops seem like the way to go for me. I don't really get the benefit of the newer features.
Don't for loops expose the technicalities way more?
I mean, the only thing closer to the actual implementation and farther from the underlying idea would be explicit go-tos and conditional jumps…
IMO, Haskell version is the most readable because it explicitly states we want the maximum of sums. I.e.:
maximum = maximum
of = .
sum = sum
s = map
The C++ 'algorithms' versions are kind of halfway through. Like if someone inlined the definitions of maximum and sum by hand.
But loops are one step of inlining farther. And bring mutability to the table - which is probably the actual reason he dislikes them.
I think the Haskell and APL solutions don't look that beautiful. They are cool for sure, though. For me, beautiful means easy to understand with minimum LoC. The Haskell solution might be easier to understand, but APL is just crazy.
Solution in Factor would be very close to Haskell, because it's point-free (local variables are implemented in a library):
[ sum ] map max
It's all fun and games until you have to debug 1000 lines of APL and it's just made out of characters. :) Though very informative and good video. Thanks
Now imagine having to debug the same program, but it is 100,000 lines of C++. And I’m not defending APL in this, I am honestly not sure which would be the greater pain in the ass.
Often your application / systems code will be lots of names (which you have created yourself) and be just as readable as any other "traditional" language. Just like any other language, you can write code that is unreadable for yourself or others, or you can write code which is perfectly readable and easily maintainable. The power of APL comes when you have to solve problems for yourself, and suddenly you find the relatively small collection of versatile primitives is quite satisfying to work with. On the topic of readability, this bit of conversation had some interesting points chat.stackexchange.com/transcript/message/55173403#55173403 - as an APL chat room, the topic of "why does everyone think APL is unreadable" comes up every now and again
In Clojure, + is a variadic function and calls reduce internally for more than 2 arguments. So the below is enough:
(->> data
(map (partial apply +)) or equivalent (map #(apply + %))
(reduce max))
or (->> data
(map (partial apply +))
(apply max))
or even (comp (partial apply max) (partial map (partial apply +))) for a point-free solution
all i learned in the first 5 minutes is that i don't fucking know C++, i'm gonna go cry now
A program is beautiful because it is beautiful, not because it is short. I'm fine with people playing code golf but not the entire world is a golf course.
Kotlin
val accounts: List<List<Int>>
val res = accounts.maxOf { it.sum() }
Very nice. I hate the closure snippet. I can only imagine the compiler errors that I would be getting trying to write something similar.
Something I was playing with recently was writing sha256sum in C, C++, Rust, Python and other languages. I had no idea where to begin writing it in Haskell, so googled around and took a peek at what someone wrote earlier. For that, the Haskell ends up verbose and ugly compared to C and Rust.
What about safety? If the input is empty, would all of them crash? In Rust, because you used unwrap, it is clear. You could have returned a Result so the caller can decide how to handle the error.
In APL, if the argument to this particular program was an "empty array" (an array whose dimensions were zero by zero, or 3 rows by zero columns, or zero rows by 3 columns), the program wouldn't crash; it would simply return an array of dimension zero by zero
In every language you can handle the error if you want to, using conditional flow, but you get a crash or an exception if you don't. In Rust, handling the error is mandatory.
Elixir looks pretty good:
wealths |> Enum.map(&Enum.sum/1) |> Enum.max
Damn, thanks a lot! I was just recently thinking maybe I could make a vector calculator with what I had learned, and combining this problem with what I had, I actually managed to make a pretty neat program that prints out a matrix. Next is deciding if I should try to use these methods for the arithmetic.
Noteworthy: it's #include <ranges> for the C++20 solution; despite the names std::ranges:: and std::views::, there is no separate <views> header you need to import.
Scala shorter solution: val maximumWealth: Array[Array[Int]] => Int = _.map(_.sum).max
I know I'm late on this video,
but the Scala solution could have been way shorter,
actually you can declare it as a simple lambda expression :)
val maximumWealth: Seq[Seq[Int]] => Int = _.map(_.sum).max
C++, using FunctionalPlus:
auto maximumWealth = fplus::fwd::compose(fplus::fwd::transform(fplus::fwd::sum()), fplus::fwd::maximum());
will work on any container, and any numeric type.
I don't know if it is actually any better, but in Scala you could also write it as
accounts map(_ sum) max
Could you do a performance comparison with larger arrays, such as 1,000,000 x 1,000,000? How much memory is used? How large are the executables, etc. Thanks! Really interesting video.
My favorite was the haskell one, then clojure, then rust.
The haskell solution is very elegant and easy to read, the clojure one is a bit verbose but I like how simple the language is. Rust is a good middle ground with more procedural low-level style.
The APL is beautiful, but it's not easy to read: unless you are familiar with it, you wouldn't be able to figure it out. Haskell, once you realize the top line is the type signature, you can get an idea just by reading it. People already have that problem with Haskell. I wish math wasn't such a boogeyman in education.
wakes up:
1) find a problem which can be solved in one line of Haskell
2) praise Haskell
At 3:09, how did you achieve this transition in the video? Is it manual editing or did you use a special program for that?
Microsoft Powerpoint 2019 "Morph" transition
thanks!
Best example of the elegance of APL. Those 4 symbols literally express the essential mathematical solution; all the rest is noise.
(N.B. This comment is about ADSP podcast; didn't see anywhere to leave a comment on an episode on the podcast page)
Conor,
My ears perked up when you said any scan is parallelisable.
The formal definition of plus-scan of a series of numbers, like iota 4 (equal to 1 2 3 4), is a series of reductions:
+\ iota 4 is:
(+/1),(+/ 1 2),(+/1 2 3),(+/ 1 2 3 4)
Because plus is associative you can parallelize each +/, and then run the +\ as four +/ on four separate cores, each of which is running the parallel plus-reduce algorithm
But that only works for associative operations like plus or multiply (commutativity, which plus also has, additionally lets you reorder operands, but regrouping is what the parallelism needs). It wouldn't work for a divide or minus scan, since those aren't associative (you could, of course, run each of the four ÷/ required for a "÷\ iota 4" on a separate core, but I don't see how you could parallelize the ÷/ itself)
But of course, being an APL guy, I let the interpreter do all the smart optimization, so I very much look forward to gaining some insight from your upcoming podcast episode discussing parallelizing scans.
T.C.
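For what it's worth, the C++17 parallel STL pins down exactly this requirement: std::inclusive_scan is specified to need only an associative operation — the implementation may regroup operands but not reorder them, so commutativity is not required. A minimal sketch (assuming a toolchain with parallel algorithm support, e.g. GCC linked against TBB):
#include <execution>
#include <iostream>
#include <numeric>
#include <vector>
int main() {
    std::vector<int> v{1, 2, 3, 4};   // iota 4
    std::vector<int> out(v.size());
    // +\ 1 2 3 4  ->  1 3 6 10
    // The parallel overload only needs associativity to regroup the
    // partial sums across cores; it never reorders the operands.
    std::inclusive_scan(std::execution::par, v.begin(), v.end(), out.begin());
    for (int x : out) std::cout << x << ' ';  // prints: 1 3 6 10
}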
If you happen to see this, any particular reason you don't like the inside-outness of the C++20 solution while being fine with it in the Haskell and APL solutions? If I followed correctly, all three are essentially max(sum(matrix)). I totally agree about the Haskell and APL solutions being far nicer. Was just curious if you'd happen to consider that or if it was a gut thing.
In Clojure, you can easily define max to be a single character :)
Just like you, in my day-to-day work, I avoid for loops, while loops, and do while loops like the plague if possible. I'm a JS developer by day, but I'm learning Haskell on the side. The more I work with Haskell, the more I love it.
Fully functional in Clojure: (comp (partial apply max) (partial map (partial apply +)))
Question: are the later, parallelized versions of the C++ solutions actually faster? The original solution is easily optimized by a compiler. Is the parallelization basically threads? Because I really think some AVX/SIMD and some for-loop unrolling will do a better job of speeding things up than the overhead of dealing with threads for such a simple problem. If I've learned anything from pushing performance in C, it's: don't obfuscate what's actually happening; select which optimization strategy you actually want for the problem, starting from very vanilla code; don't get in the way of an optimizing compiler; and know your compiler flags, because what people think is optimized (-O3) is nothing compared to what you actually get for free by throwing in a few more flags (as long as you let those optimizations work by not feeding the compiler code that's already been hand-optimized in the wrong direction).
Because of that, any time someone says "this code pattern is always faster... because of this feature", I can't help but say: yeah, right, I'm sure it's really fast. For example, most of the time when people use a high-level utility to speed something up, you get at best a 4x speedup minus overhead... hopefully. I've seen that letting a fully-flagged optimizing compiler work its best can speed things up tenfold on the regular. If you have really modern hardware, that's only going to increase, as there are more features to leverage. That's why not getting in its way is rule one for fast-ish code with ease.
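To make that concrete, this is the kind of vanilla starting point being described — nothing between the compiler and the data. The compile line in the comment is illustrative only (what each flag buys varies by compiler and target), and maximumWealth is just the name from the problem statement:
#include <vector>
// Plain nested loops: no lambdas or iterator adaptors in the optimizer's way.
// Built with something like: g++ -O3 -march=native
// most compilers will vectorize and unroll the inner sum on their own.
int maximumWealth(const std::vector<std::vector<int>>& accounts) {
    int best = 0;                         // assumes wealth is non-negative
    for (const auto& row : accounts) {
        int wealth = 0;
        for (int x : row) wealth += x;
        if (wealth > best) best = wealth;
    }
    return best;
}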
The apl solution is completely unreadable unless you know the language.
Clojure has transducers; they may make this more expressive and shorter
why the dislikes? i'm so confused it's a great video
Good question, I would like to see what people don't like.
Or maybe just some programming language fanboys.
I think the problem was very simple. Look at Tsoding
ua-cam.com/users/Tsodingplaylists
Cool. Now do an http request, and find all instances of /get_file/ between single quotes and rank the results by an integer found within the string. Then try to convince people that the haskell that results from that is beautiful. Javascript, scala, go, and rust will do the best in that order. Then clojure, c++, haskell. I don't know how to do that in APL. It would be very cool if it stays elegant. Then I will have to agree APL is cool if it can do that. Not having 50 bajillion text types might help.
JavaScript: Math.max(...accounts.map(a => a.reduce((s, e) => s + e, 0)))
People are bothered by the smallest things lol. Some will prefer to not have to memorize a new symbol and instead use a very short word like min/max, but definitely to each his own. It IS interesting to have a symbol for those.
I will submit a Groovy one-liner that strikes a nice balance of readability and elegance:
def maximumWealth(def accounts) {
accounts.collect {it.sum()}.max()
}
In my opinion, very few of these solutions are actually explicit about what they are doing. If you did not have background knowledge in these languages, these solutions would look like hieroglyphics.
I understand that this video is meant to show exaggerated differences between the languages, but in my opinion good code explains itself. This is not a complicated operation, so it should be clear what is happening for it to be truly "elegant" in my eyes.
I think the best solution for C++ is the first C++11, but using algorithms to do the sum.
What do languages other than Rust do when you call max on an empty sequence?
They throw an exception. Or at least Haskell does. But there is not really another sensible behaviour anyway
That really is the key question here! And that's what puts Rust light-years ahead of all the others in this comparison. It forces you to consider that edge case.
APL returns 0. I'm kinda disappointed because it gracefully handles many complicated things in ways that make far more sense than that.
@@nicotollenaere1587 But only because the maximum of a list of numbers is "difficult" to define. The map should not be a problem...
@@danielsimon424 I'm not sure what you mean by that. You could imagine that max would work along with a typeclass Ord' that also defines a minimum element, I guess. In this case max [] (if [] has been inferred as an integer list) would return MIN_INT. I'm not sure most people would find that natural. Really, there are only two sensible choices : either give it a type max :: [a] -> Maybe a (which is the same thing as in Rust) or just throw an exception. It's a matter of taste.
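For anyone wanting the Rust/Haskell Maybe behaviour in C++, std::optional gets you there. A minimal sketch — the name maximumWealth is just from the problem statement, not any particular solution in the video:
#include <numeric>
#include <optional>
#include <vector>
// Returns an empty optional for an empty outer vector, mirroring Rust's
// Option / Haskell's Maybe instead of throwing or picking MIN_INT.
std::optional<int> maximumWealth(const std::vector<std::vector<int>>& accounts) {
    std::optional<int> best;
    for (const auto& row : accounts) {
        int wealth = std::accumulate(row.begin(), row.end(), 0);
        if (!best || wealth > *best) best = wealth;
    }
    return best;
}
The caller then decides what an empty input means, e.g. maximumWealth({}).value_or(0).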
Agree that reduce is the right abstraction, even though there are dozens of possible solutions. Another Clojure solution:
(defn maximum-wealth [accounts]
(reduce max (map (partial reduce +) accounts)))
The threading macro is nice but not really needed here, usually reserved for simple function application where each successive return value fits into a simple additional function call. 'partial' always seems cleaner than anonymous functions.
// The JavaScript solution isn't far off from Scala:
const maximumWealth = accounts => (
  Math.max(...accounts.map(sum))
)
// Although you do have to BYO sum function:
const sum = _ => _.reduce((a, b) => a + b, 0)
Here is another Clojure solution using no anonymous function and no loss of argument visibility from the arrow macro.
(defn max-account [accounts]
(reduce max (for [i accounts]
(reduce + i))))
I don't get why you dislike the inside-outness of one solution, but are fine with the right-to-leftness of others. We're talking about functions; they're the same thing -- one just has parentheses.
C++ somehow seems the most readable one
Python:
max_wealth = max([sum(customer) for customer in customers])
The most readable and simple solution of all. IMO it rivals Haskell.
Python
solve = lambda m: max([sum(x) for x in m])
(the matrix here is a list of lists of numbers)
I really wish you picked a nontrivial problem
Would we not need to add *std::execution::par* to run the reduce/transform_reduce calls in parallel?
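Yes — the plain overloads are sequential; parallelism is opt-in via an execution policy as the first argument. A minimal sketch of the transform_reduce version with std::execution::par (whether it actually spawns threads is up to the implementation; libstdc++, for instance, delegates to TBB):
#include <algorithm>
#include <execution>
#include <numeric>
#include <vector>
int maximumWealth(const std::vector<std::vector<int>>& accounts) {
    return std::transform_reduce(
        std::execution::par,                 // without this it runs sequentially
        accounts.begin(), accounts.end(),
        0,                                   // init; assumes non-negative wealth
        [](int a, int b) { return std::max(a, b); },  // reduce: max of sums
        [](const std::vector<int>& row) {             // transform: sum a row
            return std::reduce(row.begin(), row.end(), 0);
        });
}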
In my opinion, Clojure and Rust have the most beautiful solutions.
My only thought is that the Haskell is pretty much the same as the Python: max(map(sum, accounts)). But the Python is more readable because the function application is extremely explicit and it's not point-free.
What is the fixation with threading macros in Clojure? I also find the anonymous function shorthand ugly - I much prefer:
(reduce max (map (partial reduce +) accounts))
Alternate: (defn max-wealth [accounts] (reduce max (map (partial apply +) accounts)))
The video went out of its way to imply Clojure did not have a 'sum' function (e.g. a function that can add more than two values together). Of course Clojure's '+' function does exactly that already: (+ 1 2 3 4 5)
Once you get used to them, the threading macros are second nature to use and to read as well, and they make for concise, clear code. In the beginning I had to stop and analyze code that was using them for a few minutes before it really made sense, but it's something you can get accustomed to pretty quickly.
@@okcomputr I agree that they have their uses - but in many cases I find them just adding unnecessary complexity - e.g. the example in this video - why, when we have an established and simple syntax, choose a more complex one and turn a two-liner into a three-liner? I have often seen this done - particularly in a video that should be showcasing the simplicity, brevity and beauty of Clojure/LISP rather than its ability to toss in arbitrary transformatory macros. Why, when you have finally wrapped your head around an "inside-out" Polish notation, would you then use it to write a macro that lets you scatter around random pieces of "outside-in" code with more complex rules around the use of parentheses than the original? Why, when you have mastered the concepts of recursion and nesting, would you want a macro that hides all of this? It is like trying to read a novel where every few lines you come across some cryptic symbol notifying you that you should stop reading from left->right and start reading right->left - a jarring paradigm shift - ... and all because the author has seemingly not been able to wrap his head as far around the LISP paradigm as you have and would rather write procedural-looking code... or is patronising you by assuming the reverse... end rant :-)
@@julesgosnell9791 while I agree that "less is more" generally speaking, my eyes (trained by years and years of reading and writing clojure professionally) tend to fare better parsing a threading macro form rather than nested forms in most cases with more than two function calls, mostly because such macros do the job for me of letting the forms' boundaries "pop off". And if I eventually use nested forms I tend to use newlines / indentation to improve the EDN parsing speed of human eyes (or maybe just my eyes, no need to generalize). All in all it is, I believe, personal preference and possibly more of a style choice (apropos: github.com/bbatsov/clojure-style-guide#threading-macros), but as a bit of a tongue-in-cheek counter argument: why, when you mastered homoiconic languages and macros, mixing thread-first and thread-last together to fit LISP forms into each other like puzzle pieces, would you ever consider the more mundane nested forms more lispy? :-)
It's become abused culturally. Along with the map-itus. They forget to abstract arrows into selectors and end up churning out reams of arrow macros doing the same thing over and over again. I consider them syntactic saccharine since they actually cause things in the body to be missing one parameter. Do they have a place? Sure in small isolated pockets, when half of your code is the same arrow macro pasted over and over it's hot garbage.
As the Cardano cryptocurrency uses Plutus, a dialect of Haskell, and many modules of the nodes and overall system are written in Haskell, dApp development on Cardano is a pretty good opportunity for Haskell programmers.
PS 1.0: If you use a for loop inside another (quadratic time, ouch!) you'd better have a proof of a theorem stating that there's no other solution to the problem!
PS 2.0: Your mic is clipping a bit in this video; please watch out for the gain in the other videos!
I don’t find Haskell intuitive. The C++11 solution is my favorite with two for loops. The more terse and encapsulated a solution is, is not necessarily better.
For very simple problems like these, nested for-loops add a lot of extra boilerplate. Finding the maximum of a container, or summing it up, are such common operations that there is no reason not to use the built-in, single-line functions. The intent is crystal-clear -- unlike a for-loop, where stuff can happen that has nothing to do with the task at hand
However, I am also not a huge fan of trying to twist and abuse problems to fit a certain "style" of coding, if another approach results in more readable code. If I'm writing lambda expressions inside lambda expressions, chances are whoever has to read that code will need much more time to correctly understand what the code is written to accomplish, and what side-effects it may have. In these situations, for- or while-loops (even nested:) may be more readable. Or at least that's what I think whenever I encounter code written by somebody who tried to be extra cute that day and decided to do the complex thing in "one line".
APL seems like a language Aliens would use to talk to each other.