Improve solver performance #60
The solver is currently rather slow, which is a real issue when using the "ensure solvability" feature. Given a field with w=30, h=16, m=99, s=b, clicking in the top-right corner causes a 2+ second delay before a map is shown. Instead, this should take no more than, say, 250ms. Meanwhile, increasing the number of mines to 170 freezes the field entirely; I've never seen it find a map before giving up. Increasing the field to 99x99 with 2021 mines is unthinkable right now, but shouldn't be. (For comparison, Tatham's game generates a solvable 99x99;2021 game well within a second.)
Related to #58, since resolving that issue should drastically improve the field's performance. Also, I should consider looking at Tatham's solver to see how it works.
Additionally, solving #14 may speed up finding a solvable map by increasing the number of maps that can be solved at all.
It is important to note that the most expensive operation in finding a good field is the `solveBinary` function of the solver. In particular, the `solve` function takes approximately 75% of the time, and of that, the `rref` function takes 85%. Improving that performance (or avoiding invocation of that function entirely) is what matters most; making all those other things such as event structures work better is really not as important.

This issue will be closed once a 30x16 170-mine field can be opened consistently within 1000ms.
This paper is interesting in its discussion of different approaches for solvers. However, I'm not sure improving solver performance is as necessary as reducing the number of attempts needed; this could be done by prematurely filtering out unusable fields before trying to solve them, without requiring matrix solving.
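One hypothetical example of such a cheap pre-filter: reject candidate fields whose first click opens only a tiny region, since a tiny opening rarely gives enough information to solve without guessing. The 2D boolean field representation and the threshold below are assumptions for illustration, not the project's actual types.

```javascript
// Hypothetical pre-filter: flood-fill the opening from the first click and
// reject the field if it is too small, before any matrix solving happens.
// `field` is assumed to be a 2D boolean array (true = mine).
function openingSize(field, x, y) {
  const h = field.length, w = field[0].length;
  const adjacentMines = (cx, cy) => {
    let n = 0;
    for (let dy = -1; dy <= 1; dy++)
      for (let dx = -1; dx <= 1; dx++) {
        const nx = cx + dx, ny = cy + dy;
        if ((dx || dy) && nx >= 0 && nx < w && ny >= 0 && ny < h && field[ny][nx]) n++;
      }
    return n;
  };
  const seen = new Set();
  const stack = [[x, y]];
  while (stack.length) {
    const [cx, cy] = stack.pop();
    const key = cy * w + cx;
    if (seen.has(key) || field[cy][cx]) continue;
    seen.add(key);
    // Zero-cells auto-open their neighbours, growing the opening.
    if (adjacentMines(cx, cy) === 0)
      for (let dy = -1; dy <= 1; dy++)
        for (let dx = -1; dx <= 1; dx++) {
          const nx = cx + dx, ny = cy + dy;
          if (nx >= 0 && nx < w && ny >= 0 && ny < h) stack.push([nx, ny]);
        }
  }
  return seen.size;
}

// Threshold of 9 (a full 3x3 opening) is an arbitrary illustrative choice.
function worthSolving(field, x, y, minOpening = 9) {
  return openingSize(field, x, y) >= minOpening;
}
```

This wouldn't make the solver itself faster, but it would skip entire solve attempts on fields that are unlikely to pan out.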
Alternatively/additionally, I could look into using WebAssembly to speed up the solver.
See also this paper.
Since it's the matrix operations that are slow, maybe I could rewrite the matrix solver so that it doesn't actually mutate the matrix, but instead computes entries on demand by keeping track of the changes that have previously been applied.
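One way to formalise that idea: keep each working row as a linear combination of the original rows, so the original matrix is never written to and any entry can be recomputed on demand. The sketch below is illustrative (names and structure are mine, not the project's); whether it is actually faster depends on how dense the combination vectors get.

```javascript
// Sketch of the "track changes instead of mutating" idea: row operations
// touch only small combination vectors; the original matrix stays read-only.
class TrackedMatrix {
  constructor(original) {
    this.original = original;
    // Row i starts out as 1 * originalRow(i): an identity of combinations.
    this.combos = original.map((_, i) =>
      original.map((_, j) => (i === j ? 1 : 0)));
  }
  // Entry (i, j) = dot product of row i's combination with original column j.
  get(i, j) {
    let sum = 0;
    for (let k = 0; k < this.original.length; k++)
      sum += this.combos[i][k] * this.original[k][j];
    return sum;
  }
  // row(i) -= factor * row(k), expressed purely on the combination vectors.
  subtractRow(i, k, factor) {
    for (let c = 0; c < this.combos[i].length; c++)
      this.combos[i][c] -= factor * this.combos[k][c];
  }
}
```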
As a comment on the above, I should see if my matrices are sparse (probably yes).
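Checking that should be cheap: each solver constraint involves at most the 8 neighbours of a cell plus a constant, so density ought to fall as the frontier grows. A throwaway measurement helper (illustrative, assuming plain 2D number arrays):

```javascript
// Measure the fraction of nonzero entries in a matrix, to confirm (or
// refute) the guess that the solver's constraint matrices are sparse.
function density(matrix) {
  let nonzero = 0, total = 0;
  for (const row of matrix)
    for (const v of row) {
      total++;
      if (v !== 0) nonzero++;
    }
  return nonzero / total;
}
```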
I could test what happens if I use an rref algorithm specialised for sparse matrices, and/or use a library that implements sparse matrices. I could use something such as vectorious or vectorious-plus, and rewrite my rref code to work with that. (Or keep the old code to compare against later in tests.)
If that doesn't work, I should see if there are special algorithms for sparse matrices. This lecture seems interesting.
I tried out Vectorious. Unfortunately, it only slows down the code; now it can't even solve an expert-size field. I should seriously consider contacting Tatham to ask him how he did it.
I should look into filtering out unsolvable fields, as I noted in a comment above and as suggested by Jorrit. This doesn't make the solver itself faster, but it lets the `canSolve` function say "no" more quickly.

Filtering out is definitely a good idea, but I'm not sure I'll be able to properly define and identify these cases efficiently. It's quite a tough job.
That said, an alternative method may be to move one or more mines into the currently unplayed part when an unsolvable case arises, and then check whether the field becomes solvable.
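That repair loop could look roughly like this. Everything here is an assumption for illustration: the 2D boolean field representation, the cell lists, and the deliberately naive selection strategy (first stuck mine to first free unplayed cell).

```javascript
// Sketch of the mine-relocation idea: instead of discarding a field the
// solver got stuck on, move a mine out of the stuck region into the
// still-unopened part, then let the caller re-run canSolve on the result.
// `field` is assumed to be a 2D boolean array (true = mine); cells are
// [x, y] pairs. Strategy is naive and purely illustrative.
function relocateMine(field, stuckCells, unplayedCells) {
  // Find a mine inside the region the solver could not resolve.
  const from = stuckCells.find(([x, y]) => field[y][x]);
  // Find a mine-free cell in the unplayed region to move it to.
  const to = unplayedCells.find(([x, y]) => !field[y][x]);
  if (!from || !to) return false; // nowhere to move; give up on this field
  field[from[1]][from[0]] = false;
  field[to[1]][to[0]] = true;
  return true; // caller retries solving the modified field
}
```

A real version would probably want to pick the relocation target randomly and cap the number of retries, so one bad field can't loop forever.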