hey adam, great video - love the energy and easy-to-follow explanations! :) just a quick question, how do we initialise the frontier for the target node (so basically, step 0 lol)? how does the algorithm even know where the target goal is before searching?
It's not really that the algorithm does or doesn't know the target — it's about the environment's observability. You'd want the target to be observable because bidirectional search needs a way to take steps backwards. So in a sense, you establish the goal condition you're looking for. A good example would be a chess game - there's an initial starting point and a known ending point (checkmate). While the forward search is working out which pawns to move in the beginning, working backwards from a specific checkmate (or several) looks at moves in reverse. What move directly caused the checkmate? Then what move caused that one? Etc.
@@AMGaweda great explanation. Nobody would have difficulty understanding the idea of bidirectional search, especially looking at that graph. But I'd almost say everyone is thrown off by the question: if we already know the goal state, why do we still need to search for it? That's the question that needed a good explanation - not really the directional search per se.
@liu baoying Knowing what the goal state / condition is doesn't necessarily mean the agent is already meeting it. In a fully observable environment we're able to map out the actions that reach a given state, and bidirectional search then helps us find a path to the goal in roughly half the time. To use an example, suppose I am currently in my bedroom, wanting a snack. My goal condition is being in my kitchen, eating a snack. Searching from the starting state, I'd first need to leave my bedroom. From the goal state, I'd need to have made the snack in the kitchen. While the starting state's search plans ahead, the goal state's search backtracks the steps that led to it. Once the two share a common step (entering the kitchen, for example), we have a complete path from start to goal for the agent to follow.
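To make that concrete, here's a rough sketch of bidirectional BFS in Python. The room names and graph shape are just made up for this example, and it assumes an undirected graph - for a directed graph the backward frontier would need reversed edges (the "method for taking steps backwards" mentioned above):

```python
from collections import deque

def bidirectional_search(graph, start, goal):
    """BFS from both ends at once; stop when the two frontiers meet.

    Assumes an undirected graph given as {node: [neighbors]}.
    """
    if start == goal:
        return [start]
    # The backward frontier is seeded with the goal state -- this is
    # the "step 0" from the question: the goal must be known up front.
    parents_f = {start: None}   # forward search tree
    parents_b = {goal: None}    # backward search tree
    frontier_f, frontier_b = deque([start]), deque([goal])

    def expand(frontier, parents, other_parents):
        # Expand one BFS layer; return a node seen by both searches, if any.
        for _ in range(len(frontier)):
            node = frontier.popleft()
            for nbr in graph.get(node, []):
                if nbr not in parents:
                    parents[nbr] = node
                    if nbr in other_parents:
                        return nbr  # frontiers met here
                    frontier.append(nbr)
        return None

    while frontier_f and frontier_b:
        meet = expand(frontier_f, parents_f, parents_b)
        if meet is None:
            meet = expand(frontier_b, parents_b, parents_f)
        if meet is not None:
            # Stitch the two half-paths together at the meeting node.
            path, n = [], meet
            while n is not None:          # meet -> back to start
                path.append(n)
                n = parents_f[n]
            path.reverse()
            n = parents_b[meet]
            while n is not None:          # meet -> forward to goal
                path.append(n)
                n = parents_b[n]
            return path
    return None  # no path exists

# The bedroom-to-kitchen example from the comment above:
rooms = {
    "bedroom": ["hallway"],
    "hallway": ["bedroom", "kitchen"],
    "kitchen": ["hallway", "eating snack"],
    "eating snack": ["kitchen"],
}
print(bidirectional_search(rooms, "bedroom", "eating snack"))
# -> ['bedroom', 'hallway', 'kitchen', 'eating snack']
```

The two searches "meet" at the kitchen: the forward tree knows how to get there from the bedroom, and the backward tree knows which steps from there lead to the snack.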
This is so beautiful. I just found out about bidirectional search today
Real nice
Explain also the Bidirectional Heuristic Search - BAE*