To define a problem for searching, the following components need to be described:
1. Initial state
2. Possible actions (i.e. possible ‘paths’ from the present state)
3. Goal test (i.e. how to identify whether a particular state is a goal state)
4. Path cost
The possible actions from a state are usually given by a successor function. For a state x, Successor(x) would give a set of <p, y> pairs, where p is an action available in state x and y is the resulting state that can be reached by performing p.
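As a minimal sketch of the idea, a successor function can simply map each state to its <action, resulting-state> pairs. The toy graph below is an illustrative assumption, not part of the notes:

```python
# Hypothetical successor function for a toy route-finding problem.
# The graph itself is an illustrative assumption.
GRAPH = {
    "A": [("go-B", "B"), ("go-C", "C")],
    "B": [("go-D", "D")],
    "C": [("go-D", "D")],
    "D": [],
}

def successor(x):
    """Return the <action, resulting-state> pairs available from state x."""
    return GRAPH[x]

print(successor("A"))  # [('go-B', 'B'), ('go-C', 'C')]
```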
Breadth first search
BFS is poor with respect to both time and space requirements. It is optimal when all step costs are equal.
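A minimal BFS sketch, assuming an unweighted toy graph (all step costs equal, so the first path found is also an optimal one):

```python
from collections import deque

# Illustrative unweighted graph (an assumption, not from the notes).
GRAPH = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}

def bfs(start, goal):
    frontier = deque([[start]])          # FIFO queue of paths: shallowest first
    visited = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path
        for succ in GRAPH[state]:
            if succ not in visited:
                visited.add(succ)
                frontier.append(path + [succ])
    return None

print(bfs("A", "D"))  # ['A', 'B', 'D']
```

The frontier holds whole paths, which makes the memory cost of BFS easy to see: every partial path at the current depth is kept at once.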
Uniform cost search
Same as BFS, except that the node with the least path cost, rather than the shallowest node, is expanded next. Thus the algorithm is optimal even when the step costs are not equal.
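A uniform-cost search sketch, assuming a hypothetical weighted graph: the only change from BFS is that the frontier is a priority queue ordered by path cost, so the cheapest node is expanded next.

```python
import heapq

# Illustrative weighted graph (an assumption, not from the notes).
GRAPH = {"A": [("B", 1), ("C", 5)], "B": [("C", 1), ("D", 9)], "C": [("D", 2)], "D": []}

def uniform_cost_search(start, goal):
    frontier = [(0, start, [start])]     # heap of (path cost, state, path)
    explored = set()
    while frontier:
        cost, state, path = heapq.heappop(frontier)
        if state == goal:                # goal test when the node is expanded
            return cost, path
        if state in explored:
            continue
        explored.add(state)
        for succ, step in GRAPH[state]:
            if succ not in explored:
                heapq.heappush(frontier, (cost + step, succ, path + [succ]))
    return None

print(uniform_cost_search("A", "D"))  # (4, ['A', 'B', 'C', 'D'])
```

Note that the goal test happens on expansion, not on generation: the direct edge B→D costs 9, but the cheaper route through C is popped from the heap first.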
Depth first search
DFS has modest memory requirements. It is not optimal.
Backtracking search
Same as DFS, but saves even more memory by generating only one successor at a time instead of all of them. e.g. If a, b, c, d are all possible successors of a node, DFS will generate all four and then follow a, coming back to b once a has been fully traversed. Backtracking search will generate only a, and will remember which successor to generate next once a has been fully traversed (i.e. node b).
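A backtracking-search sketch on an assumed toy graph: successors are tried one at a time, so only the current path (plus the position within each successor list) needs to be remembered, rather than a frontier of all generated nodes.

```python
# Illustrative graph (an assumption, not from the notes).
GRAPH = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}

def backtrack(state, goal, path):
    if state == goal:
        return path
    for succ in GRAPH[state]:            # successors are tried one at a time
        if succ not in path:             # avoid cycles along the current path
            result = backtrack(succ, goal, path + [succ])
            if result is not None:
                return result
    return None                          # dead end: undo and try the next successor

print(backtrack("A", "D", ["A"]))  # ['A', 'B', 'D']
```

The recursion stack plays the role of the "remember which to expand next" bookkeeping: when a call returns None, the enclosing loop simply moves on to the next successor.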
Depth limited search
A depth limit is introduced to DFS to prevent it from getting ‘lost’ in a very long or infinite subtree.
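A depth-limited search sketch, using an assumed graph that contains a cycle plain DFS could loop on forever; the depth limit cuts the recursion off:

```python
# Illustrative graph with a cycle A <-> B (an assumption, not from the notes).
GRAPH = {"A": ["B"], "B": ["A", "C"], "C": []}

def depth_limited_search(state, goal, limit):
    if state == goal:
        return [state]
    if limit == 0:
        return None                      # cutoff: do not descend any further
    for succ in GRAPH[state]:
        result = depth_limited_search(succ, goal, limit - 1)
        if result is not None:
            return [state] + result
    return None

print(depth_limited_search("A", "C", 2))  # ['A', 'B', 'C']
print(depth_limited_search("A", "C", 1))  # None (goal lies below the limit)
```

The second call shows the cost of the limit: a solution deeper than the limit is missed, which is what iterative deepening (below in these notes) addresses.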
Iterative deepening depth first search
Depth limited search is performed repeatedly with increasing depth limits. While this may intuitively seem excessively wasteful, it is not: the nodes that get expanded repeatedly are the ones near the root of the tree, and there are far fewer nodes near the root than near the leaves.
In general, this is the preferred search when there is a large search space and the depth of the solution is unknown.
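An iterative-deepening sketch that reuses a depth-limited DFS, growing the limit until the goal is found. The graph and the maximum depth are illustrative assumptions:

```python
# Illustrative graph (an assumption, not from the notes).
GRAPH = {"A": ["B", "C"], "B": ["D"], "C": [], "D": ["E"], "E": []}

def depth_limited(state, goal, limit):
    if state == goal:
        return [state]
    if limit == 0:
        return None
    for succ in GRAPH[state]:
        result = depth_limited(succ, goal, limit - 1)
        if result is not None:
            return [state] + result
    return None

def iterative_deepening(start, goal, max_depth=10):
    for limit in range(max_depth + 1):   # repeat with increasing depth limits
        result = depth_limited(start, goal, limit)
        if result is not None:
            return result
    return None

print(iterative_deepening("A", "E"))  # ['A', 'B', 'D', 'E']
```

Nodes near the root (here, A and B) are re-expanded at every iteration, but because each deeper level of the tree dominates the total work, the repeated shallow expansions add little overhead.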
Bidirectional search
Two searches are run simultaneously, one expanding forward from the initial state and the other backward from the goal state. Depending on the problem, searching backwards from the goal state is not always easy.
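A bidirectional BFS sketch on an assumed undirected graph: two frontiers grow one level at a time from the start and the goal, and the search stops as soon as they meet.

```python
from collections import deque

# Illustrative undirected graph (an assumption, not from the notes).
GRAPH = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}

def bidirectional_search(start, goal):
    if start == goal:
        return [start]
    fwd = {start: [start]}               # paths found by the forward search
    bwd = {goal: [goal]}                 # paths found by the backward search
    fwd_frontier, bwd_frontier = deque([start]), deque([goal])
    while fwd_frontier and bwd_frontier:
        for frontier, this, other, forward in (
            (fwd_frontier, fwd, bwd, True),
            (bwd_frontier, bwd, fwd, False),
        ):
            for _ in range(len(frontier)):   # expand exactly one level
                state = frontier.popleft()
                for succ in GRAPH[state]:
                    if succ in other:        # the two searches have met
                        a, b = this[state], other[succ]
                        return a + b[::-1] if forward else b + a[::-1]
                    if succ not in this:
                        this[succ] = this[state] + [succ]
                        frontier.append(succ)
    return None

print(bidirectional_search("A", "D"))  # ['A', 'B', 'C', 'D']
```

Searching backward is straightforward here only because the graph is undirected; with one-way actions, the backward search would need a predecessor function, which is exactly the difficulty the notes mention.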
Avoiding repeated states
If an algorithm does not detect repeated states, it may end up in an infinite loop. The general strategy is to maintain a list of already expanded states (called the closed list) and to check each newly expanded state against that list.
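The closed-list idea can be sketched as a small change to BFS, here on an assumed cyclic graph that would otherwise keep the search looping:

```python
from collections import deque

# Illustrative cyclic graph (an assumption, not from the notes).
GRAPH = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": []}

def graph_search(start, goal):
    frontier = deque([[start]])
    closed = set()                       # states already expanded
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path
        if state in closed:
            continue                     # repeated state: skip instead of looping
        closed.add(state)
        for succ in GRAPH[state]:
            frontier.append(path + [succ])
    return None

print(graph_search("A", "D"))  # ['A', 'B', 'C', 'D']
```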
Searching with partial information
If the environment is not fully observable or deterministic, then the following types of problems occur:
1. Sensorless problems
If the agent has no sensors, it cannot know its current state, and hence it may have to execute long, partly redundant action sequences to ensure that the goal state is reached regardless of its initial state.
2. Contingency problems
This is when the environment is partially observable or when actions are uncertain. Then after each action the agent needs to verify what effects that action has caused. Rather than planning for every possible contingency after an action, it is usually better to start acting and see which contingencies do arise.
This is called interleaving of search and execution.
A problem is called adversarial if the uncertainty is caused by the actions of another agent.
3. Exploration problems
This can be considered an extreme case of contingency problems: when the states and actions of the environment are unknown, the agent must act to discover them.