I thought about it for 15 mins but couldn’t think of any mathematical tricks. I thought of lots of minor tricks, like comparing the running total to the target and not adding any more multiplications once it’s over, things that would cut 10–20% here and there, but nothing that fundamentally changes the big-O running time.
For reference, here’s my solution for part 2 in smalltalk. I just generated every possible permutation and tested it. Part 1 is similar, mainly I just used bit magic to avoid generating permutations.
(even if you haven’t used it, smalltalk is fairly readable: everything is left to right, except in parens)
day7p2: in
	| input |
	input := in lines collect: [ :l | (l splitOn: '\:|\s' asRegex) reject: #isEmpty thenCollect: #asInteger ].
	^ (input select: [ :line |
		(#(1 2 3) permutationsWithRepetitionsOfSize: line size - 2)
			anySatisfy: [ :num | (self d7addmulcat: line ops: num) = (line at: 1) ]
	]) sum: #first.

d7addmulcat: nums ops: ops
	| final |
	final := nums at: 2.
	ops withIndexDo: [ :op :i |
		op = 1 ifTrue: [ final := final * (nums at: i + 2) ].
		op = 2 ifTrue: [ final := final + (nums at: i + 2) ].
		op = 3 ifTrue: [ final := (final asString , (nums at: i + 2) asString) asInteger ]
	].
	^ final
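The “bit magic” mentioned for part 1 can be sketched like this (in Python rather than the Smalltalk above; the function name and structure are illustrative, not the actual code): with only two operators, each of the n-1 operator slots maps to one bit of a counter, so no permutation generation is needed.

```python
def part1_solvable(target, nums):
    """True if some left-to-right assignment of + and * over nums hits target."""
    k = len(nums) - 1                 # number of operator slots
    for mask in range(1 << k):        # each bit pattern is one operator assignment
        acc = nums[0]
        for i in range(k):
            if mask >> i & 1:         # bit set: multiply
                acc *= nums[i + 1]
            else:                     # bit clear: add
                acc += nums[i + 1]
        if acc == target:
            return True
    return False
```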
Probably the easiest optimization (which admittedly I didn’t think of myself) is to work backwards: you can eliminate multiplication and concatenation early if you start with the answer and check terms from the right.
I don’t entirely see how; you still need every possible combination of the left side to see what it would become. Plus, addition and multiplication are order-independent anyway.
The point is to prune away search space as early as possible. If the rightmost operand is 5, say, and the answer ends in a 7, then the rightmost operator cannot be anything other than plus. This is a deduction you can’t make going left to right. Remember that in this problem the usual order of operations does not apply.
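A minimal sketch of that right-to-left idea (in Python rather than the Smalltalk above; `solvable` and its recursion are illustrative, assuming positive operands as in the puzzle input):

```python
def solvable(target, nums):
    """True if some left-to-right combination of +, *, and digit
    concatenation over nums evaluates to target."""
    # Work backwards: peel off the rightmost operand and invert each operator.
    *rest, last = nums
    if not rest:
        return target == last
    # Inverse addition: always worth trying while the remainder is non-negative.
    if target - last >= 0 and solvable(target - last, rest):
        return True
    # Inverse multiplication: only possible when target is divisible by last.
    if target % last == 0 and solvable(target // last, rest):
        return True
    # Inverse concatenation: only possible when target ends in last's digits
    # and a non-empty prefix remains.
    p = 10 ** len(str(last))
    if target % p == last and target // p > 0 and solvable(target // p, rest):
        return True
    return False
```

The divisibility and suffix tests are exactly the deductions described above; whenever they fail, the whole subtree under that operator choice is skipped.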
I actually did this. It did not end up being any faster than the brute-force solution, since it seems to catch only the easy cases.
Perhaps your implementation continues the search longer than necessary? I got curious and tried it myself. Runtime went from 0.599s (brute force) to 0.017s.
I am not sure how much the formal complexity changes, but the pruning that occurs when working from right to left and bailing early when inverse multiplication or concatenation fails seriously cuts down on the number of combinations that you need to explore. With this pruning, my code ended up checking whether the full inverted equation equalled the target only 2.0% of the time on average (relative to the average 3^(n-1) / 2 checks necessary for a naive brute force) for equations that did not have solutions. Interestingly, for equations that did have solutions, the figure rose to 6.6%.
How about the overall number of checks you did? String cat is heavy, but addition and multiplication are absurdly fast, probably faster than the branches needed for early escape.
Inverse concat isn’t too heavy if you implement it with logs and such. It’s certainly still heavier than integer add/mul (or sub/div in my case), but arithmetic is usually faster than memory allocation. However, predicting the performance hit from branching a priori is tricky on modern hardware, which implements sophisticated branch prediction and speculative execution. Furthermore, branching happens anyway between terms to select the right operation, though a misprediction there is likely less significant unless you are doing string manipulation.
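For what it’s worth, the suffix test can even skip the string round-trip entirely. A hedged sketch in Python (the helper name is made up, and it assumes positive operands):

```python
def strip_suffix(target, last):
    """If target's decimal digits end with last's digits, return the
    remaining prefix as an integer; otherwise return None."""
    p = 1
    n = last
    while n > 0:            # p becomes 10^(number of digits in last)
        p *= 10
        n //= 10
    # Prefix must be non-empty, hence target >= p.
    if target % p == last and target >= p:
        return target // p
    return None
```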
“Overall number of checks” is a bit ambiguous; if taken to mean the number of times I check against the target for early escape, plus the final check on success, the figure is 15% relative to the average 3^(n-1) / 2 checks required by brute force (n = number of terms in the equation, giving n-1 operators). That’s still almost a 7-fold decrease. If we instead look at the number of operator evaluations relative to the (n-1) * 3^(n-1) / 2 evaluations expected from an average brute-force search (3^(n-1) / 2 combinations with n-1 operations conducted per combination), the figure is only 7.0%. In both cases, there is a significant amount of work not being done.
Interesting. I’m doing naive string cat; it would probably be way faster with just math.
Now that I think about it more carefully, you can effectively prune whole subtrees of options with this checking, especially for cat.
I wonder, did you get to benchmark both approaches?
That pruning is indeed the goal. As for benchmarking, I did not implement a brute-force solution; I might try it if I finish one of the next few days quickly (lol fat chance). I did bench math vs string cat but did not record numbers. IIRC it made no measurable difference in Clojure, where input parsing with my crappy parser-combinator lib dominated, but math was something like a factor of three faster than string cat in Chez Scheme.