Wow, this is insanely clever, yet results in such simple, short and clean code!
Unbelievable :o great work
Nicely done. I did the same approach, first manually by printing it out to the terminal and then automating it. Your solution is really tidy!
This was super clean 😁
I think we can use the load as a hash or identifier for the grid at any point after a spin cycle, so we can track the load (an int) instead of an entire grid.
You could theoretically have two separate grids with the same load, which would break this approach, no?
@@xavier29771 I just saw this lol. Yep you are 100% correct.
So it's an abacus where some of the beads are stuck to the rail.
thank you hyper neutrino
For part 1, I did it another way, without needing to flip or change the order of the elements of the original matrix. For each rock, I looked at the elements above it: if I encounter a #, I set the value of that rock to len(matrix) - i_coord_of_boulder + 1; if I reach the top of the matrix, I set its value to len(matrix); if I encounter another rock, I set the value of the rock I am evaluating to that rock's value - 1. Then I just sum everything up. It was quite easy to code, but then I got stuck with this approach for part two. Don't know if there is some way to make it work for part 2 as well.
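Here's a minimal sketch of what I mean, with my own variable names (0-indexed rows, grid as a list of strings), so take the exact indexing with a grain of salt:

```python
# compute the north load without moving anything: walk each column top
# to bottom, tracking the load the next rounded rock would receive
def north_load(grid):                 # grid: list of strings of '.', '#', 'O'
    total = 0
    for col in range(len(grid[0])):
        next_load = len(grid)         # a rock rolling to the top row scores len(grid)
        for row in range(len(grid)):
            if grid[row][col] == "#":
                next_load = len(grid) - row - 1   # rocks pile up just below the #
            elif grid[row][col] == "O":
                total += next_load
                next_load -= 1        # the next rock in this column stacks one lower
    return total
```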
I believe the best way to do p2 with this approach is also to apply a 90deg rotation 4 times - lmk if there's a better approach.
@@mathcat4 I tried that and it took forever
@@lerbyn Same, my code was probably very slow, but I wonder if you can code that up in a naive way and end up with anything fast :0
Really well done. And thank you for the explanations. Could I ask you to explain how open(0).read() opens the test.txt file, please?
Ah, that's done by my AoC utility script, which just runs python3 solution.py < test.txt (or in.txt). You can find the script in my GitHub repo.
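For context: open(0) opens file descriptor 0, which is stdin, so the shell redirection is what actually supplies the file. A tiny demo:

```python
# run as: python3 demo.py < test.txt
data = open(0).read().splitlines()   # fd 0 is stdin, so this reads the redirected file
print(len(data), "lines read")
```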
@@hyper-neutrino Thank you
This is cool! When I detected a cycle, I incorrectly assumed the cycle length was the iteration count, instead of accounting for when the cycle begins.
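For anyone else who hit this, a rough sketch of the offset-aware jump (my own names; spin_cycle is assumed to perform one full north/west/south/east cycle and return a hashable grid such as a tuple of strings):

```python
def run_cycles(grid, spin_cycle, target=1_000_000_000):
    # simulate until a previously seen state reappears, then jump
    # ahead to the same phase of the loop instead of spinning on
    seen = {}                                        # grid -> iteration first seen
    i = 0
    while i < target:
        if grid in seen:
            offset = seen[grid]                      # iteration where the loop begins
            period = i - offset                      # the actual cycle length
            i = target - (target - offset) % period  # skip ahead, same phase as now
            seen = {}                                # prevent a second jump
            continue
        seen[grid] = i
        grid = spin_cycle(grid)
        i += 1
    return grid
```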
Can anyone tell me on which line exactly the rotation happens in part 2? All I see is reversing the rows (in the 3rd line of the for loop).
The zip(*grid) is a diagonal flip and the row reversal is a horizontal flip. Combining these results in a rotation.
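A tiny demo with a made-up 2x2 grid:

```python
grid = ["ab",
        "cd"]
# zip(*grid) transposes (diagonal flip); [::-1] reverses each row
# (horizontal flip); together that's a 90-degree clockwise rotation
rotated = ["".join(col)[::-1] for col in zip(*grid)]
print(rotated)   # ['ca', 'db'] -- 'a' went from top-left to top-right
```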
silly me😔...thanx:)
I have some simple functions that examine the fields above, left, below, and right and move the rocks if they're free. For part one, this posed no problem, but for part two, it became quite performance-intensive. I discovered that running through 1000 cycles yielded the same result as running through 1000000000 cycles. The only connection between 1000 and 1000000000 is that 1000^3 equals 1000000000. Can anyone explain if this was just luck or if there's an actual reason?
It means that 1000 happens to land on your (1B - offset) % period + offset value, where the period is the length of the detected loop. It could also be a coincidence that the total load value is the same even though the grids are different.
@@hyper-neutrino You are right, it was just a coincidence. I was a bit confused because they're both powers of ten, and the test case also returns the correct value at 1000 cycles. The correct approach is probably to cycle until you find a repeating state and then skip the remaining iterations. I will adjust my solution. Great videos, by the way; they helped me on a few tricky problems, although I use C++ for this year's AoC.
Using tuples of 100 strings as cache keys, array entries, and search values feels so wrong but so right, I know, but maybe show the audience that data hashing is used in these cases, so as not to blow up the RAM? 😅
I could make a separate video about hashtables/hashsets sometime! (And other data structures)
You could just use one table instead of a set plus an array, like table[string, int]; in Python that's really just a dict.
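A rough sketch of that, combined with the hashing idea above (hypothetical names; sha1 digests as keys keep memory down, at the cost of a negligible collision risk):

```python
import hashlib

def key_of(grid):
    # hash the whole grid (tuple of strings) down to a short digest
    # instead of storing the full grid as the dict key
    return hashlib.sha1("\n".join(grid).encode()).hexdigest()

seen = {}                      # digest -> iteration first seen
# inside the spin loop:
#     seen[key_of(grid)] = i
```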