You people are _massive_ nerds and I love it.
Oh, and you bet I wanna see you do this without ignoring half the problem. I want to see blood, sweat, and tears. :>
This comment made me smile. Thank you
Oh my word! I thought the title looked awfully familiar :) Great video as always!
Great job
Seeing you write (a b c) and then fill it in was probably one of the best ways I have seen forks stepped through. It really helped me understand the times I tried and failed to make forks.
I made the same solution in Python with the exact same method, but when I submitted it on LeetCode I got an error, and I find the same error in your code. With the list [0, 2, 2, 1, 1] the answer should be 3, not 2. I solved this with a dedup.
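For anyone hitting the same case, here is a minimal Python sketch of the sort-and-compare approach with the dedup fix folded in (the set comprehension does the deduplication; the function name is mine):

```python
def first_missing(nums):
    # Dedup, keep only positives, sort, then walk the expected sequence 1, 2, 3, ...
    pos = sorted({x for x in nums if x > 0})
    for expected, value in enumerate(pos, start=1):
        if value != expected:
            return expected  # first gap in the sequence
    return len(pos) + 1      # no gap: the answer is one past the end
```

Without the dedup, the duplicate 1s and 2s in [0, 2, 2, 1, 1] shift the comparison and produce 2 instead of 3.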
Is there an APL way to build up a 2^31-1 bitmap, set the bits matching the positive inputs, and return the smallest unset value? That would keep you true to the question's constraints.
Why build the giant bitmap when you could build a length-n bitmap, which is much smaller? Is the giant bitmap considered constant space even though it is far larger and slower than a length-n bitmap?
@@brendanhansknecht4650 your solution isn't constant space. Austin's solution is constant space, but it doesn't work for all integers, only up to 32-bit integers, which is an unjustified assumption.
@@argus456 sure, but that is just pedantry. My solution would always use less than or equal memory. Also, Austin's solution would still likely not be constant space in APL, because APL creates intermediate state while computing.
@@brendanhansknecht4650 that's not pedantry, it's literally against the terms of the assignment. The extra space used should be constant as n grows. If it adds a bit for every n+1, it is not constant, even though it is always smaller.
@@argus456 yes, but the other solution when written in APL won't be constant either.
You actually don't need to sort if you take the first value from IOTA after removing all values from the input ;)
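A quick Python sketch of that idea (the helper name is mine): since range already produces values in order, the first survivor after removing the input is the answer, with no sort at all.

```python
def first_missing(nums):
    seen = set(nums)
    # 1..n+1 always contains the answer; take the first value not in the input
    return next(i for i in range(1, len(nums) + 2) if i not in seen)
```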
I thought along similar lines, like number of people. Here's some julia code: sm(seq) = minimum(setdiff(Set(1:length(seq)+1), seq)). This is O(N) in time and space (just like all the ⍳⍴-based solutions, written here as 1:length(seq)+1).
Hi Conor, my name is Lakshyaraj. I want to learn different kinds of programming paradigms and languages; I have chosen C++, Rust, Haskell, Racket, APL and Ruby. Should I consider any more?
Forth (stack programming) and Prolog (logic programming) would be nice additions. And I would swap Racket for Common Lisp: you get more access with it and better metaprogramming capabilities, and you will grasp the whole "code is data is code" thing that actually makes Lisp beautiful a lot quicker.
I would also add regular C in case you don't know it already. Learning C++ without learning C will probably hinder your ability to understand the memory-oriented nature of the language (it is too easy to get lost in the OOP aspect of C++).
I would add a very small hint of Brainf**k, because at its core it simulates a Turing machine. Using BF for a while will give you a glimpse of WHY that language is actually Turing complete, and you will gain a better understanding of computation in general by using it.
I would add something like OpenGL shaders to give you some feel for what coding on the GPU is like, though I would be hesitant to call those a "language" or a "paradigm".
I would probably add either R, Julia, MATLAB, or Python (NumPy) for the OTHER kind of array programming languages, the ones used for statistics and such.
Finally, I would add a little bit of assembly, since all of your languages and paradigms eventually become just instructions. Understanding them can be very, very useful.
I think you can get the list of missing ones up until (tally iota), simply by having "(tally iota) without identity".
Since you'd use the iota of something and then throw elements out of it, the negative elements will implicitly get ignored, so you don't have to worry about them.
Now you only need 2 more extra steps:
- a min reduction, and you get the smallest of the missing ones.
- as a first step, tally+1 needs to be used as the iota's input, so that in case the list is complete, the undiscarded remaining element is tally+1 itself, and if there is a missing one earlier, it is guaranteed to be smaller than the tally anyway.
so:
Min Reduce((iota tally+1) without identity).
or rather
First((Iota Tally+1)Without Id).
As Iota's output, after filtering, will remain in order.
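The recipe above, written out as a hedged Python sketch (names are mine): take (iota tally+1) without the input, then min-reduce.

```python
def first_missing(nums):
    n = len(nums)
    # (iota n+1) without the input, then a min reduction; negatives drop out for free
    return min(set(range(1, n + 2)) - set(nums))
```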
I mean, if you get rid of the performance requirements this is totally easy, right? Python:
lambda s : min({*range(1,len(s)+2)}.difference({*s}))
Definitely! Please consider equivalent solution in the APL language: {⌊/(1+⍳≢⍵)~⍵}
I want to see you writing dynamic programming problems in Haskell and APL
Concise, precise, and complete are your strengths -- I always see and love them in your videos ❤ On top of that, it is simple and easy to follow. Slides are 🔥
You're my inspiration, mentor and guru. Thanks a lot! Hats off. 🙏
Your link to Asher's video is for a different problem. Here's the correct link: ua-cam.com/video/8S4EUt9HCiQ/v-deo.html
At 11:18, for anyone like me who's a newbie and didn't get a tree, you need to first enable boxing with:
]boxing on
And for more info on a command:
]boxing -??
]boxing on -t=tree
Hi, can we use the APL language for developing applications for Android/iOS? 🤔
bqnjs is probably one way
Haskell: head . foldr delete [1..]
Short
Readable
Easy to understand
Probably not the most efficient solution
All in all a typical Haskell solution
Thanks for the interesting video by the way
It is quite possible to write almost the same: {⊃⍵~⍨1+⍳≢⍵}
beautiful
@@vadimtukaev6772 close, but this bugs out when 1 is the missing number
Elegant, but I think it may be O(n²). If I feed it (replicate 100 1 ++ [2..100]), there'll be a stack of 99 filters trying to remove 1 from each of 2..100.
Another approach: since the number to return must be a positive integer, it can only be in the range 1 to the maximum (ceiling reduce) of the input, plus 1. So you just need iota(1 + max). Then drop all numbers of this vector that are already part of the input vector (compress). The first number of the remaining vector is the required number.
so if the input is in variable A (eg. A := 1 2 3 5):
1 take ( not (B:= iota 1 + ceiling / A) exist A ) / B
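The same recipe in Python, as a sketch (names are mine; the default=0 guard is my addition, to handle inputs with no positive values):

```python
def first_missing(a):
    top = max((x for x in a if x > 0), default=0)  # ceiling-reduce over the positives
    present = set(a)
    # keep only the members of 1..max+1 not already in A, then take the first
    b = [i for i in range(1, top + 2) if i not in present]
    return b[0]
```

The list b is never empty, since top+1 cannot already be in the input.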
What exactly is the advantage of using an array language like APL instead of, say, C or Python? This seems like it takes so much longer to code, so what is the advantage of using APL?
It was longer to explain, not to code. Plus there is the lack of a sorting primitive, which could be imported from a library, as in the case of the C++ solution (not shown). But this is not typical: if one knows the APL primitives, writing it is quite straightforward. Here is a solution in the J language:
:i.1+>./a=.a#~0
The advantage, as usual when looking at a higher level language, is concentrating on the task at hand instead of minutiae about how to build established algorithms.
E.g. here's a different algorithm in BQN for this task: 1(=+⊢)´∨
Sorts an array in descending order, then starting with 1, increments for each item from the right that equals the current value.
It has no need for special casing negative values etc. It's O(n log n), though, O(n) solutions are likely a bit trickier.
Example O(n) solution: 5e5 { ⊑⊐⟜1 0¨⌾(((1+𝕨)⌊0⌈0∾𝕩)⊸⊏) (2+𝕨)⥊1 } 1‿3‿2‿6e6
Build an array of 1s 2+5e5 entries large. Clamp the incoming numbers to valid indices in that array, and set the corresponding entries (and number 0) to 0. Then find the index of the first remaining 1. This algorithm translates nicely to GPUs.
Both are reasonable to read, mostly from right to left.
Python can be extended to have some array support via e.g. NumPy, Numba or CLyther.
There are extensions to C as well, such as OpenMP and OpenCL, but they tend to still focus on the machine level steps.
Personally I see BQN very much as a tool of thought. The short example above is a full solution that can be jotted down in 13 strokes. Iterating on algorithms is much more palatable this way.
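A plain-Python sketch of that O(n) scatter algorithm (the 500000 cap is the problem's fine-print bound on n; the function name is mine):

```python
def first_missing(nums, cap=500_000):
    # flags[i] == 1 means i is still unseen; size cap+2 so the answer cap+1 fits
    flags = [1] * (cap + 2)
    flags[0] = 0                  # 0 is never a valid answer
    for x in nums:
        if 0 < x <= cap + 1:
            flags[x] = 0          # "scatter" the clamped input into the flag array
    return flags.index(1)         # first still-unseen positive
```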
So lemme think. In a list of N numbers, you expect values in the range 1 to N. Can't you just delete (or set to zero) all given values that fall within that range from an iota N+1, and then find the first (non-zero) value? That's O(N) complexity and constant memory, right?
I'll try implementing that in APL, but just started so...
Iota n+1 is not constant memory. But that solution would work otherwise.
@@brendanhansknecht4650 note to self: look up what something means instead of assuming what it is... XD
Constant memory means a fixed amount of used space, i.e. O(1), not O(n).
@@DeltaEpsilon7787 So it does work by using the constant 5e5 (maximum size of input array) from the fine print requirements (which were ignored by the video creators).
Can you put the APL editor you use in the description?
No wonder you had to ignore the O(n) requirement.
The APL code is ridiculous in terms of complexity; each combinator adds unnecessary time. I am not sure what the complexity of the sorting idiom in APL is, but even if it were miraculously O(1), this code would still be O(n^4).
Looks cool and honestly fun to practice problem solving but I don't think anyone should write such code for production.
Would love to see an O(n) solution.
How is this O(n^4)?
He just:
1. filters out the non-positive elements which is just O(n)
2. sorts the array, we are assuming this is O(1)
3. takes the first length + 1 elements which is just O(1)
4. compares the array with the sequence 1, 2, ..., length + 1 which is just O(n)
5. finds the first non-zero element which is just O(n)
So the total complexity of the whole operation is just O(n) + O(1) + O(1) + O(n) + O(n) = O(n)
He is doing the O(n) operations, like filtering, one by one.
@@metinersinarcan92 why are you assuming that sorting is O(1)?! That doesn't make any sense.
@@harleyspeedthrust4013 I am assuming it because LazieKat is assuming it in his/her comment.
APL-inspired iterator-y Python (this needs "from itertools import dropwhile"):
lambda nums: list(dropwhile(lambda i: i[0] == i[1], enumerate(dropwhile(lambda x: x < 1, sorted(nums)), start=1)))[0][0]
I was having trouble posting links/BQN code on the other video, but my solution was essentially: filter out negatives; append 0 in case the list is empty; iota up to 1 + the smaller of the size of the list and the max value in the list (so that a list with one giant positive number isn't stupidly slow); get a bit list by checking which elements of the iota are in the original list; invert; get the indices; return the first index + 1.
Just try to defeat UA-cam putting some spaces in the link
Hard to be sure without actual code, but I think this is O(n) but not constant in extra space. The bit list would grow with n. If you'd still like to solve it, can you find a way to keep a ledger of the numbers found without creating a second list?
@@argus456 It is O(n) space, just like the solution in the video. I was only offering another solution with the same limitations.
APL/BQN are quite hard to use when dealing with less than O(n) space complexity. The issue is that depending on how each operation is written, they can use different amounts of space. Most operations will create intermediate results which makes many algorithms takes O(n) space instead of O(1) space. More a general limitation of array languages and the control they give you.
I know a correct O(1) space complexity solution for this problem. It just cannot be programmed nicely in array languages, if at all.
@@brendanhansknecht4650 fair enough. I'm usually not too interested about implementation details of certain languages, more about the algorithm itself. But I recognize that this video's comment section is not the right place to hold that opinion ;)
@@brendanhansknecht4650 I fully expect it can in Futhark.
First time seeing APL after this vid popped up in my recommended, and APL is literally a fucking alien language lol
python3: f = lambda x: min(filter(lambda i: not (i in x), range(1, max(x)+2)))
Are we assuming that the numbers in the set are unique? I think this falls apart otherwise
He did, and that is an unnecessary bug.
iterators are slow though...
Very nice problem...
Here is a golfed C (GCC) solution with O(n) time and O(1) auxiliary space for n
The core of the question is in the space complexity, the rest of the problem is relatively easy. Can you find a way to keep track of the integers found without creating a second array?
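One classic answer to that challenge (not from the video; the standard in-place trick) is to use the input array itself as the ledger, swapping each value v in 1..n into slot v-1. Note that it mutates its input. A Python sketch:

```python
def first_missing(nums):
    n = len(nums)
    i = 0
    while i < n:
        v = nums[i]
        # put v where it belongs (index v-1); skip values out of range or already placed
        if 1 <= v <= n and nums[v - 1] != v:
            nums[i], nums[v - 1] = nums[v - 1], v
        else:
            i += 1
    # the first slot whose value doesn't match its 1-based index is the answer
    for idx, v in enumerate(nums, start=1):
        if v != idx:
            return idx
    return n + 1
```

Each swap places at least one value in its final slot, so the loop is O(n) time with O(1) extra space.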
APL is fascinatingly close to hieroglyphs...
What is this greek or latin esoteric programming language jesus
More like wingdings. We're already using the Latin alphabet, and written Greek isn't particularly concise either.
Funny enough the only parts of the language I dislike are when you need or don't need (), how you can't always use / to filter, and how some functions either result in a single value or an array of a single value, otherwise it's pretty awesome once you learn it!
Something has happened while I was "away" from programming ... readability ... FFS. What the Sam Heck are APL and BQN? Math crap? Esperanto for nerds? Unicode overuse? Some Sadomasochistic urge to constantly be confused by your own code? So many questions. Is it elegant? Does it streamline anything for the average human?
Gawd ... I'm either dumb, too old, or both. $0.02
Haha. The funny part is that APL is super old.
That being said, if you take the time to learn the symbols, it is actually very readable and elegant.
One of Iverson's books was entitled “Notation as a Tool of Thought”. I meekly suppose this is the answer to your question.
Esperanto ne estas sama afero. Oni povas legi ĝin tre facile :)
omg you guys how can anyone read Chinese? you can't even sound out the letters because they're for whole words and there's so many of them
in the last video you misspelled MESMERIZING as "memorizing", so I was sure you were reading from text.
but this time, you were more natural.
No wonder people use Python:
def first_missing(num_list):
    lowest_missing = 1
    for nums in sorted(num_list):  # sort first; the scan assumes ascending order
        if nums == lowest_missing:
            lowest_missing = nums + 1
    return lowest_missing
That's a really dumb argument; it's just a different style of a solution.
Here's a procedural C++ solution, mimicking yours (assuming #include <vector> and <algorithm>):
int first_missing(std::vector<int> num_list) {
    std::sort(num_list.begin(), num_list.end());  // the scan assumes ascending order
    int lowest_missing = 1;
    for (auto num : num_list) {
        if (num == lowest_missing) lowest_missing = num + 1;
    }
    return lowest_missing;
}
I'm pretty sure you can write a more functional Python solution mimicking the C++ one in the video as well, but I don't know Python too well so I can't write it
"0 iota swap omega member-of swap iota tally omega" is a very simple solution in apl idk why connor made his so complex. apl is great for simd and i think the approaching gpu driven world will highly favour apl thinkers and tinkerers
Fails for a reverse sorted list. BQN form: 1{1⊸+⍟(𝕨=𝕩)𝕩}´∨ somelist
The ∨ sorts descending, to match the ´ fold right. That sorting breaks the O(n) requirement by being O(n log n).
member-of solutions tend to break it by being repeated into O(n²).
A scatter broadcast operation, something numpy etc inherited from APL, can solve that (knowing that n≤5e5).
That was unnecessarily complicated; "1(=+⊢)´∨ list" does the same thing with no branching.
head
It would have been more interesting abiding the constraints.
{ ((1+≢⍵) @ (0∘=)) ⊃ 0~⍨ ({ ⍺×(2-≢⍵) }⌸ ⍵,⍨⍳≢⍵) }
I'm sure there's a better way, but this is the first thing I could come up with. Basically I created a _hashmap-like_ structure that has 1 or 2 of every number that is potentially missing, then found the first/smallest one that's missing, i.e. only shows up once; and if the result is 0 because nothing is missing, I replace that 0 with 1 + input length.
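A rough Python analogue of that hashmap-like idea, as a sketch (using Counter in place of APL's key operator; names are mine): seed every candidate once, pile the input on top, and candidates still at count 1 are missing.

```python
from collections import Counter

def first_missing(nums):
    n = len(nums)
    # seed every candidate 1..n once, then add the input's in-range values on top
    counts = Counter(range(1, n + 1))
    counts.update(x for x in nums if 1 <= x <= n)
    # a count of exactly 1 means the candidate was seeded but never seen in the input
    missing = [k for k in range(1, n + 1) if counts[k] == 1]
    return missing[0] if missing else n + 1
```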