Exercise 69 Scores.
Modify the program to calculate scores for moves and return the move with the best score.
Notice how a score is associated with a particular move. Perhaps it should be part of the Move object. Doing this might let us eliminate tracking of the integer score from the "main" loop.
Exercise 70 Comparing Moves.
Add the score to the Move class. Add some sort of max or comparison operator between moves.
Is it better this way? It seems so to me, so I'd tentatively accept this change. But my commitment to it is not unswerving; for now, it helps, but if it gets in the way in the future, away it goes.
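The change the exercise describes can be sketched roughly as follows. This is an illustrative Python sketch, not the book's code; the `Move` class and its fields are assumptions about what the program might look like after the refactoring.

```python
from functools import total_ordering

@total_ordering
class Move:
    """A move at (row, col) carrying its own score (hypothetical sketch)."""
    def __init__(self, row, col, score=0):
        self.row = row
        self.col = col
        self.score = score

    def __eq__(self, other):
        return self.score == other.score

    def __lt__(self, other):
        return self.score < other.score

# With comparison defined on moves, the "main" loop no longer needs to
# track an integer score alongside the move; it can ask for the max directly:
best = max([Move(0, 0, 5), Move(1, 1, 9), Move(2, 0, -3)])
```

Because `Move` now orders itself, the caller's bookkeeping shrinks to a single `max()` call, which is the payoff the exercise is pointing at.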
The program calculates every possible move and response. This is feasible for tic-tac-toe, and perhaps it would still be okay if we converted it to Hasbro's Connect Four, but it is certainly not feasible for a game like chess or Go. Eventually we would have to develop a new strategy.
One way to handle this is to limit the depth to which we search. Suppose we establish a depth cutoff value; searches deeper than this value will simply return "don't know." We will pass an additional parameter representing the current depth.
Exercise 71 Depth.
Use Add Parameter to add a depth parameter, and maintain its value properly. Once you have the depth parameter, add an early check that returns when things are too deep. What move will you return?
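One possible shape for the depth cutoff, shown on a toy nested-list "game tree" rather than the real board search; the names (`score_position`, `DEPTH_CUTOFF`, `UNKNOWN`) are illustrative assumptions, not from the book:

```python
DEPTH_CUTOFF = 3   # assumed cutoff; tune for the real game
UNKNOWN = 0        # the "don't know" score for searches that go too deep

def score_position(tree, depth=0):
    """Depth-limited scoring: ints are leaf scores, lists are positions
    with the listed follow-up positions as children."""
    if depth > DEPTH_CUTOFF:
        return UNKNOWN                 # too deep: give up and say "don't know"
    if isinstance(tree, int):
        return tree                    # a finished position with a known score
    # otherwise, score each child one level deeper and take the best
    return max(score_position(child, depth + 1) for child in tree)
```

Note that the `depth + 1` on the recursive call is the whole "maintain its value properly" requirement: every level of search passes down one more than it received.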
Exercise 72 Caching.
We can think of performance tuning as refactoring for performance: it tries to keep the program doing the same job, only faster. If we think of the program as exploring the game tree of possible moves, we might reach the same board via different paths. Could you cache the moves, so you could recognize boards you've already rated?
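A minimal sketch of such a cache (game programmers call this a transposition table). It assumes the board can be made hashable, e.g. a tuple of cells; the names and the stand-in scoring rule are invented for illustration:

```python
def cached_score(score_fn):
    """Memoize board scores so a board reached via different move
    orders is rated only once."""
    cache = {}
    def wrapper(board):
        if board not in cache:
            cache[board] = score_fn(board)
        return cache[board]
    return wrapper

calls = []   # records which boards were actually evaluated

@cached_score
def rate(board):
    calls.append(board)    # only runs on a cache miss
    return sum(board)      # stand-in scoring rule, not the real one

rate((1, 0, -1))
rate((1, 0, -1))           # second call is answered from the cache
```

The design question the exercise raises reappears here: whether the cache lives in `Game`, in `Move`, or in a decorator like this one depends on which object owns scoring.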
Exercise 73 Balance.
Do we have the right balance in our objects? Are there any missing objects? Should Game calculate the score or should Move? Try shifting it around and see the consequences. Do some of these decisions make caching easier or harder?
Exercise 74 New Features.
Add some new features, in test-first format; make sure to refactor along the way.
Score a win by the opponent at -100.
Extend to m x n tic-tac-toe.
Require that a move be at the lowest empty space in a column.
Exercise 75 Min-Max.
Add another feature: use the min-max algorithm, described in any Artificial Intelligence textbook. Instead of just saying "nonwins are all the same," you say "Choose my best move, assuming the opponent makes the move that's worst for me." The opponent uses the same rule. How is this reflected in the code? Is it tricky to use?
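The rule above can be sketched compactly on the same toy nested-list tree; scores are from the first player's point of view, and the levels alternate between my move (maximize) and the opponent's reply (minimize). Again an illustrative sketch, not the book's code:

```python
def minimax(tree, maximizing=True):
    """Min-max over a nested-list tree: ints are leaf scores,
    lists are positions with their follow-ups as children."""
    if isinstance(tree, int):
        return tree
    # score every reply one level down, with the roles swapped
    scores = [minimax(child, not maximizing) for child in tree]
    return max(scores) if maximizing else min(scores)
```

Notice how little changes in the code: the recursion is the same shape as plain search, but the `not maximizing` flip is what encodes "the opponent makes the move that's worst for me."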
There's an extension to the approach called alpha-beta pruning. It says that we can avoid searching parts of the tree by establishing cutoff values. Find an AI book and see if you can implement this approach. Is this a refactoring, a new development, or what?
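For comparison while you research it, here is one common shape of alpha-beta pruning, again over the toy nested-list tree and not taken from any particular textbook: it returns the same answer as plain min-max but stops scanning a node's children once the window closes (`alpha >= beta`), because the opponent would never steer play into that branch.

```python
def alphabeta(tree, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    """Min-max with alpha-beta cutoffs over a nested-list tree."""
    if isinstance(tree, int):
        return tree
    if maximizing:
        best = float("-inf")
        for child in tree:
            best = max(best, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:
                break       # opponent already has a better alternative
        return best
    else:
        best = float("inf")
        for child in tree:
            best = min(best, alphabeta(child, alpha, beta, True))
            beta = min(beta, best)
            if alpha >= beta:
                break       # we already have a better alternative
        return best
```

Whether adding the cutoffs counts as a refactoring is exactly the exercise's question: the observable result is unchanged, but the set of positions examined is not.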
Exercise 76 Do-Over?
This has been an experiment in changing the structure of an application. There are other paths we could take. In particular, I feel the balance between classes could go down a different path. Also, the first tests assumed 3 x 3 tic-tac-toe; it would be interesting to start at 1 x 1 and work up to m x n, letting 3 x 3 be a special case.
Would it be better to start over, or to work from the current base?