Consider the following three variants of minimax: the simple version, alpha-beta search, and depth-limited search, and consider the games of tic-tac-toe and chess. For each combination of minimax variant and game, answer the following question: can that minimax variant possibly never terminate, in computing the best next move? Justify your answer.

For chess, assume that the rules do not impose any limit on the total number of moves in a game.

Answer:

Minimax algorithm: Minimax is a recursive algorithm that computes an optimal move for a player under the assumption that the opponent also plays optimally, with the same knowledge of the game. It can be applied to games such as chess and tic-tac-toe.

Case 1: Minimax, simple version: For a simple game like tic-tac-toe, the state space is small enough to search exhaustively: every move fills a square, so no game lasts more than 9 plies and the game tree is finite. Simple minimax therefore always terminates on tic-tac-toe. (Note that optimal play by both sides leads to a draw; Max can win only if Min makes a mistake, and vice versa.)

For a complex game like chess the situation is different. If, as stated, the rules impose no limit on the total number of moves, positions can repeat indefinitely, so the game tree is infinite and a recursion that follows an unbounded sequence of moves never reaches a terminal state. Even on finite trees the cost is prohibitive: with b legal moves per position and maximum depth m, the running time is O(b^m). Hence, for chess without a move limit, simple minimax can indeed fail to terminate while computing the best next move.
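To make the tic-tac-toe half of this concrete, here is a minimal sketch of plain minimax for tic-tac-toe (the names `minimax`, `winner`, and the board encoding are illustrative assumptions, not from the question). The recursion always bottoms out because every move fills a cell, so depth never exceeds 9:

```python
def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    for a, b, c in lines:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (value, move) for `player`: +1 = X wins, -1 = O wins, 0 = draw."""
    w = winner(board)
    if w == 'X':
        return 1, None
    if w == 'O':
        return -1, None
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:                      # board full with no winner: draw
        return 0, None
    best = None
    for m in moves:
        board[m] = player
        value, _ = minimax(board, 'O' if player == 'X' else 'X')
        board[m] = None                # undo the move
        if best is None or (player == 'X' and value > best[0]) \
                        or (player == 'O' and value < best[0]):
            best = (value, m)
    return best

value, move = minimax([None] * 9, 'X')
print(value)   # optimal play from the empty board is a draw: 0
```

Running this from the empty board terminates after a few hundred thousand node expansions and confirms the game-theoretic value of tic-tac-toe is a draw.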

Case 2: Alpha-beta pruning: To improve the performance of minimax, pruning is applied. The basic idea behind alpha-beta pruning is that the correct minimax decision can be computed without examining every node in the tree. Applied to a standard minimax tree, the procedure returns the same move as standard minimax, but prunes branches that cannot influence the final decision.

It can be applied to trees of any depth (small, like tic-tac-toe, or large, like chess), and it is often possible to prune entire subtrees rather than just leaves. Alpha represents the best value found so far along the path for Max, while Beta represents the best value found so far along the path for Min. The procedure updates Alpha and Beta as it descends and prunes the remaining branches at a node as soon as that node's value is known to be worse than the current Alpha (for Max) or Beta (for Min). With perfect move ordering, that is, examining the best move first at every node, alpha-beta examines only about O(b^(m/2)) nodes instead of O(b^m). Crucially, though, pruning does not change the termination behaviour: on tic-tac-toe, alpha-beta terminates just as simple minimax does, while on chess without a move limit the tree is still infinite, pruning cannot be guaranteed to cut off every infinite line of play, and so alpha-beta may also fail to terminate.
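The pruning rule described above can be sketched over an explicit game tree given as nested lists with numeric leaves (the name `alphabeta` and the example tree are illustrative assumptions). It returns the same value as plain minimax but skips children that cannot affect the decision:

```python
def alphabeta(node, alpha, beta, maximizing):
    """Minimax value of `node`; leaves are numbers, inner nodes are lists."""
    if isinstance(node, (int, float)):      # leaf: return its static value
        return node
    if maximizing:
        value = float('-inf')
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:               # Min will never allow this branch
                break                       # prune the remaining children
        return value
    else:
        value = float('inf')
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:               # Max will never choose this branch
                break
        return value

# Classic three-branch example: the minimax value is 3, and the middle
# subtree is cut off after its first leaf (2), since Min can already
# hold Max below the 3 found in the first subtree.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, float('-inf'), float('inf'), True))  # 3
```

Note that the pruning only saves work per node; it does not bound the depth of the recursion, which is why it cannot rescue termination on an infinite tree.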

Case 3: Depth-limited search: Here the state space is searched only to a pre-defined number of levels: an n-move look-ahead, where n is a fixed cutoff. States at the cutoff are scored by a heuristic evaluation function rather than by playing the game out to a terminal state. Because every branch of the recursion stops at depth n, and each position has only finitely many legal moves, the search examines a finite number of nodes regardless of the game. Depth-limited minimax therefore always terminates, on both tic-tac-toe and chess, even when chess has no limit on the total number of moves. The price is accuracy rather than termination: a limited look-ahead may fail to detect that a promising path leads to a bad position just beyond the cutoff (the horizon effect), and following such a path can lose the game. This is the trade-off: the depth bound guarantees that the algorithm terminates, but it hides winning (or losing) states that lie beyond the permitted depth.
