Food Courts Martial
Part 1 was just a simple search. Part 2 looked like it just needed a trivial modification, but with the removal of the one-way tiles, the result I was getting for the example was too large. I switched to a different method of determining the path length, but didn't yet figure out what I had been doing wrong. Since the search space was now significantly larger, my part 2 code took almost an hour to come up with the answer.
I rewrote part 2 to simplify the maze into a graph with a node for each intersection and for the start and goal tiles, with edge costs equal to the path length between each. This resulted in significantly faster iteration (17 seconds instead of 52 minutes), but didn’t actually reduce the search space. I’m assuming there’s some clever optimization that can be done here, but I’m not sure what it is.
The rewrite was still getting the wrong answer, though. I eventually figured out that it was including paths that didn’t actually reach the goal, as long as they didn’t revisit any nodes. I changed my recursive search function to return a large negative result at dead ends, which fixed the issue.
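For illustration, here's a rough Python sketch of that idea (not the original code; names are made up, and `graph[node]` is assumed to map each neighboring intersection to the corridor length between them):

```python
# Illustrative sketch: longest-path DFS over the contracted graph.
# Returning negative infinity at dead ends keeps branches that never
# reach the goal from being counted as candidate paths.
def longest_path(graph, node, goal, visited):
    if node == goal:
        return 0
    visited.add(node)
    best = float("-inf")
    for neighbor, length in graph[node].items():
        if neighbor not in visited:
            best = max(best, length + longest_path(graph, neighbor, goal, visited))
    visited.remove(node)
    return best
```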
I sorted the bricks by their lower Z coordinate, then tried to move each of them downward, doing collision checks against all the others along the way. Once a level with collisions was found, I recorded each colliding brick as a supporter of the falling brick.
For part 1, I made another table of which other bricks each brick was supporting. Any bricks that weren’t the sole support for any other bricks were counted as safe to disintegrate.
For part 2, I sorted the bricks again after applying gravity. For each brick, I included it in a set of bricks that would fall if it were removed, then checked the others further down the list to see if they had any non-falling supporters. Those that didn’t would be added to the falling set.
Initially I was getting an answer for part 2 that was too high. It turned out that I was counting bricks that were on the ground as being unsupported, so some of them were getting included in the falling sets for their neighbors. Adding a z-level check fixed this.
Both of these have room for optimization, but non-debug builds run in 0.5s and 1.0s respectively, so I didn't feel the need to write an octree implementation or anything.
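A rough Python illustration of that part 2 pass (not the original code; `supporters[b]` stands in for the table of bricks directly beneath brick b, `bottom_z[b]` for its lowest z level, and bricks are assumed to be hashable ids such as indices):

```python
# Hedged sketch of the chain-reaction count for part 2.
def chain_fall_count(bricks, supporters, bottom_z):
    total = 0
    for removed in bricks:
        falling = {removed}
        # Walk the remaining bricks in ascending z order so a brick's
        # supporters are resolved before the bricks resting on it.
        for b in sorted(bricks, key=lambda b: bottom_z[b]):
            if b in falling or bottom_z[b] == 1:  # ground-level bricks never fall (the z-level fix)
                continue
            if supporters[b] and supporters[b] <= falling:
                falling.add(b)
        total += len(falling) - 1  # don't count the removed brick itself
    return total
```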
My part 2 solution assumes the input has an unimpeded shortest path from the center of each garden section to its corner, and to the center of its neighbor. The possible destinations will form a diamond pattern, with “radius” equal to the number of steps. I broke down the possible section permutations:
Sections that are completely within the interior of the diamond
Sections containing the points of the diamond
Depending on the number of steps, there may be sections adjacent to the point sections that have two corners outside of the diamond
Edge sections. These will form a zig-zag pattern to cover the diamond boundary.
I determined how many of each of these should be present based on the number of steps, used my code from part 1 to get a destination count for each type, and then added them all up.
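For the record, here's roughly how that tally can look in Python, assuming the usual input geometry (a square grid of size S with the start centered and clear lanes to the edges, so n = (steps − S//2) / S is a whole number). The per-type destination counts are assumed to come from part-1-style searches with the right step budgets and entry points; only the multiplicities are shown, following the standard diamond decomposition, which may not map one-for-one onto the categories above:

```python
# Hedged sketch of the final sum for part 2.
# full_same / full_other: counts for fully covered sections, split by whether
#   their parity matches the starting section.
# tip_counts: the four point sections.
# small_edge_counts: edge sections with only a small corner inside the diamond
#   (n of each per diagonal side).
# large_edge_counts: edge sections mostly inside the diamond with one corner
#   cut off (n - 1 of each per diagonal side).
def total_plots(n, full_same, full_other, tip_counts, small_edge_counts, large_edge_counts):
    return ((n - 1) ** 2 * full_same
            + n ** 2 * full_other
            + sum(tip_counts)
            + n * sum(small_edge_counts)
            + (n - 1) * sum(large_edge_counts))
```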
Another least common multiple problem. I kinda don’t like these, as it’s not practical to solve them purely with code that operates on arbitrary inputs.
Part 1 was pretty straightforward. For part 2 I made an ItemRange type that's just one integer range for each attribute. I also made a split function that returns two ItemRange objects, one for the values that match the specified rule, and the other for the unmatched values. When iterating through the workflows, I start a new recursion branch to process any matching values, and continue stepping through with the unmatched values until none remain or they're accepted/rejected.
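A minimal Python sketch of that range-splitting idea (illustrative names, not the original code; each attribute is tracked as a half-open [lo, hi) range):

```python
from dataclasses import dataclass

@dataclass
class ItemRange:
    ranges: dict  # e.g. {"x": (1, 4001), "m": (1, 4001), "a": (1, 4001), "s": (1, 4001)}

    def split(self, attr, op, value):
        """Split against a rule like 'a < 2006': returns (matching, remainder)."""
        lo, hi = self.ranges[attr]
        if op == "<":
            match, rest = (lo, min(hi, value)), (max(lo, value), hi)
        else:  # ">"
            match, rest = (max(lo, value + 1), hi), (lo, min(hi, value + 1))

        def make(r):
            if r[0] >= r[1]:
                return None  # empty range: nothing on this side
            return ItemRange({**self.ranges, attr: r})

        return make(match), make(rest)
```

The number of distinct parts an accepted ItemRange covers is then just the product of its range widths.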
Yeah, I read up on ear clipping for a small game dev project a while back, though I don’t remember if I actually ended up using it. So my solution is inspired by what I remember of that.
Yep, I figure it’s good exercise to make me think through the problems thoroughly.
Shoelace formula
This would have been really useful to know about. I’ve committed to a certain level of wheel-reinvention for this event unless I get really stuck, but I’m sure it’ll come up again in the future.
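For anyone else who hasn't run into it: the shoelace formula gives twice the signed area of a simple polygon from its vertices. A minimal Python version, for illustration:

```python
# Twice the signed area is the sum of cross products of consecutive
# vertices (the "shoelace" pattern); take abs and halve for the area.
def shoelace_area(vertices):
    total = 0
    for (x1, y1), (x2, y2) in zip(vertices, vertices[1:] + [vertices[0]]):
        total += x1 * y2 - x2 * y1
    return abs(total) / 2
```

On grid problems like this one it usually gets paired with Pick's theorem, A = i + b/2 - 1, to convert between the polygon's area and a count of lattice tiles.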
I am not making good time on these anymore.
For part 1, I walked through the dig plan instructions, keeping track of the highest and lowest x and y values reached, and used those to create a character grid with an extra 1-tile border around it. Walked the instructions again to plot out the trench with #, flood-filled the exterior with O, and then counted the non-O tiles. Sort of similar to the pipe maze problem.
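A small sketch of that flood-fill step in Python (illustrative, not the original code; grid is assumed to be a mutable list of character lists with the one-tile border already added, so (0, 0) is guaranteed to be outside the trench):

```python
from collections import deque

def count_interior_and_trench(grid):
    h, w = len(grid), len(grid[0])
    queue = deque([(0, 0)])  # the border guarantees this tile is exterior
    while queue:
        y, x = queue.popleft()
        if 0 <= y < h and 0 <= x < w and grid[y][x] == ".":
            grid[y][x] = "O"
            queue.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    # Everything not marked O is either trench (#) or enclosed interior.
    return sum(c != "O" for row in grid for c in row)
```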
This approach wouldn’t have been viable for part 2, due to the scale of the numbers involved. Instead I counted the number of left and right turns in the trench to determine whether it was being dug in a clockwise or counterclockwise direction, and assumed that there were no intersections. I then made a polygon that followed the outer edge of the trench. Wherever there was a run of 3 inward turns in a row, that meant there was a rectangular protrusion that could be chopped off of the main polygon. Repeatedly chopping these off eventually turns the polygon into a rectangle, so it’s just a matter of adding up the area of each. This worked great for the example input.
Unfortunately when I ran it on the actual input, I ran out of sets of inward turns early, leaving an “inside out” polygon. I thought this meant that the input must have intersections in it that I would have to untwist somehow. To keep this short, after a long debugging process I figured out that I was introducing intersections during the chopping process. The chopped regions can have additional trench inside of them, which results in those parts ending up outside of the reduced polygon. I solved this by chopping off the narrowest protrusions first.
Another tough one. Judging by the relative lack of comments here, I wasn’t the only one that had trouble. For me this one was less frustrating and more interesting than day 12, though.
I solved part 1 by doing a recursive depth-first search, biasing towards a zigzag path directly to the goal in order to establish a baseline path cost. Path branches that got more expensive than the current best path terminated early. I also stored direction, speed, and heat loss data for each tile entered. Any path branch that entered a tile in the same direction and at the same (or greater) speed as a previous path was terminated, unless it had a lower heat loss.
This ran pretty slowly, taking around an hour to finish. I took a break and just let it run. Once it completed, it had gotten pretty late, so I did a quick naive modification for part 2 to account for the new movement restrictions, and let that run overnight. The next day it was still running, so I spent some time trying to think of a way to speed it up. Didn’t really get anywhere on my own, so I started reading up on A* to refresh my memory on how it worked.
The solution that I arrived at for the rewrite was to use Dijkstra’s algorithm to pre-compute a map of what the minimum possible costs would be from each tile to the goal, if adjacent tiles could be moved to without restriction. I then used that as the heuristic for A*. While I was writing this, the original part 2 program did finish and gave the correct answer. Since I was already this far in though, I figured I’d finish the rewrite anyway.
The new program got the wrong answer, but did so very quickly. It turned out that I had a bug in my Dijkstra map. I was sorting the node queue by the currently computed cost to move from that node to the goal, when it instead should have been sorted by that plus the cost to enter that node from a neighbor. Since the node at the head of the queue is removed and marked as finalized on each iteration, some nodes were being finalized before their actual minimum costs were found.
When using the A* algorithm, you usually want your heuristic cost estimate to underestimate the actual cost to reach the goal from a given node. If it overestimates instead, the algorithm will overlook routes that are potentially more optimal than the computed route. This can be useful if you want to find a “good enough” route quickly, but in this case we need the actual best path.
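Roughly what that pre-pass can look like, written as textbook Dijkstra run backwards from the goal (Python for illustration, not the original code; grid is assumed to be a list of lists of per-tile heat-loss values). Because the movement restrictions are ignored, the resulting map never overestimates the remaining cost, so it's a safe A* heuristic; and because each node's priority is its full tentative cost, the finalize-too-early ordering problem described above doesn't arise:

```python
import heapq

def heuristic_map(grid, goal):
    h, w = len(grid), len(grid[0])
    dist = {goal: 0}       # dist[tile] = minimum unrestricted cost from tile to goal
    heap = [(0, goal)]
    while heap:
        d, (y, x) = heapq.heappop(heap)
        if d > dist.get((y, x), float("inf")):
            continue  # stale entry left over from an earlier relaxation
        for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
            if 0 <= ny < h and 0 <= nx < w:
                nd = d + grid[y][x]  # stepping from (ny, nx) into (y, x) costs grid[y][x]
                if nd < dist.get((ny, nx), float("inf")):
                    dist[(ny, nx)] = nd
                    heapq.heappush(heap, (nd, (ny, nx)))
    return dist
```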
I’m caught up!
This one was pretty straightforward. Iterate through the beam path, recursively creating new beams when you hit splitters. The only gotcha is that you need a way to detect infinite loops that can be created by splitters. I opted to record energized non-special tiles as - or |, depending on which way the beam was traveling, and then abort any path that retreads those tiles in the same way. I meant to also use + for where the beams cross, but I forgot and it turned out not to be necessary.
Part 2 was pretty trivial once the code for part 1 was written.
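For illustration, here's a compact Python version of the same idea, using a set of (position, direction) pairs as the loop guard instead of the -/| tile markers (the effect is the same; grid is assumed to map (y, x) to the tile character):

```python
def energized(grid, h, w, start=((0, 0), (0, 1))):
    beams, seen = [start], set()
    while beams:
        (y, x), (dy, dx) = beams.pop()
        while 0 <= y < h and 0 <= x < w and ((y, x), (dy, dx)) not in seen:
            seen.add(((y, x), (dy, dx)))
            c = grid[(y, x)]
            if c == "/":
                dy, dx = -dx, -dy
            elif c == "\\":
                dy, dx = dx, dy
            elif c == "|" and dx != 0:
                beams.append(((y + 1, x), (1, 0)))  # split: second beam goes down
                dy, dx = -1, 0                      # this beam continues up
            elif c == "-" and dy != 0:
                beams.append(((y, x + 1), (0, 1)))  # split: second beam goes right
                dy, dx = 0, -1                      # this beam continues left
            y, x = y + dy, x + dx
    return len({pos for pos, _ in seen})
```

Part 2 is then just running this from every possible entry position and direction and taking the maximum.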
Almost caught up. Not much to say about this one. Part 1 was a freebie. Part 2 had a convoluted description, but was still pretty easy.
Getting caught up slowly after spending way too long on day 12. I’ll be busy this weekend though, so I’ll probably fall further behind.
Part 2 looked daunting at first, as I knew brute-forcing 1 billion iterations wouldn’t be practical. I did some premature optimization anyway, pre-calculating north/south and east/west runs in which the round rocks would be able to travel.
At first I figured maybe the rocks would eventually reach a stable configuration, so I added a check to detect if the current iteration matches the previous one. It never triggered, so I dumped some of the grid states and it became obvious that there was a cycle occurring. I probably should have guessed this in advance. The spin cycle is effectively a pseudorandom number generator, and all PRNGs eventually cycle. Good PRNGs have a very long cycle length, but this one isn’t very good.
I added a hash table, mapping the state of each iteration to the next one. Once a value is added that already exists in the table as a key, there's a complete cycle. At that point it's just a matter of walking the cycle to determine its length, and calculating from there.
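A minimal sketch of that shortcut in Python, with slightly different bookkeeping (mapping each state to the iteration it first appeared at, rather than to its successor). `spin` is assumed to apply one full north/west/south/east cycle, and states are assumed to be hashable (e.g. a tuple of strings):

```python
def state_after(state, target=1_000_000_000):
    seen = {state: 0}
    for i in range(1, target + 1):
        state = spin(state)
        if state in seen:
            start = seen[state]              # iteration where this state first appeared
            length = i - start               # cycle length
            remaining = (target - start) % length
            for _ in range(remaining):       # walk the leftover partial cycle
                state = spin(state)
            return state
        seen[state] = i
    return state
```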
This one was a nice change of pace after the disaster of day 12. For part 1 I kept a list of valid column indexes, and checked those columns in each row for reflections, eliminating columns from the list as I went. To find vertical reflections, I just transposed the grid first.
Part 2 looked daunting at first, but I just needed to add a smudges counter to each column candidate, eliminating them when their counter reaches 2. For scoring, just count the 1-smudge columns.
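A rough Python sketch of that column-candidate scan (illustrative names, not the original code; rows are strings, and vertical reflections are handled by transposing the grid first, as described above):

```python
def smudged_reflection_column(rows, max_smudges=1):
    width = len(rows[0])
    # smudges[c] counts mismatched cells for a mirror line between columns c-1 and c.
    smudges = {c: 0 for c in range(1, width)}
    for row in rows:
        for c in list(smudges):
            left, right = row[:c][::-1], row[c:]        # compare outward from the mirror line
            smudges[c] += sum(a != b for a, b in zip(left, right))
            if smudges[c] > max_smudges:
                del smudges[c]                          # eliminated at 2 smudges
    # Part 2 wants exactly one smudge; part 1 would look for zero instead.
    return next((c for c, s in smudges.items() if s == max_smudges), None)
```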
And finally:
Fellas, is it woke for YouTube to funnel viewers towards pro-fascist videos?