Here we showcase the work we did over the last semester: the best parts of our projects for computing paths and simulating crowds in SteerLite, and the best parts of our crowd simulators, interactive experiences, and narratives in Unity!
Virtual Reality with Steerlite and Unity
Friday, December 18, 2015
Interactive Narrative
Project Overview:
In this project, we combine all of the knowledge we've developed so far to create three stories in Unity. The first story doesn't take any input from the user and uses behavior trees (the KADAPT library) to author interactions. The second story allows a player to control a hero and interact with and change the story. The third story gives the user total control over the story except for some required key events that must occur for the story to progress. We now discuss the story in depth.
Github: https://github.com/CG-F15-23-Rutgers/UnityProjects
Documentation: https://dl.dropboxusercontent.com/u/48369447/KnightGame/Report.docx
B3 Source Code Download: https://dl.dropboxusercontent.com/u/48369447/KnightGame/Code.zip
Story Overview: An evil Goblin has taken over the kingdom! A valiant knight must battle the goblin and his henchmen to rescue the kingdom! There are five sections to the map:
1.) The graveyard
2.) The mountain
3.) The castle
4.) The plains
5.) The forest
The player starts just outside the graveyard. Here he must face the vanguard. The vanguard consists of two of the goblin’s evil knight henchmen. He can defeat them by slashing them with his sword, scaring them by shouting at them (they flee to the forest), or throwing rocks at them. Next, he faces the mid-guard. He can choose to slay them with his sword or scare them away. The hero then can either battle the royal guard or sneak past them to battle the Goblin King. The Goblin King uses the ancient martial art of Goblin fight dancing to battle the hero. The hero defeats the Goblin King by casting a spell. If the Goblin King is defeated, the royal guards flee. The rearguard roams and patrols the plains and the skeletons/ghosts of the kingdom’s former knights roam the graveyard.
Characters:
1.) Hero: Large, distinct knight trying to rescue the kingdom from the Goblin King.
2.) Goblin King: An evil goblin that has taken over the land, castle, and country’s gold supply; resides in the castle at the top of the mountain
3.) Vanguard: Henchmen of the Goblin King. First two evil knights the Hero must face.
4.) Skeletons: Wandering ghosts of the kingdom’s former knights; roam in the graveyard
5.) Mid-Guard: Henchmen of the Goblin King; two knights stationed in the graveyard
6.) Royal Guard: Henchmen of the Goblin King; four knights stationed on the mountain
7.) Rearguard: Henchmen of the Goblin King; four knights roaming the plains
Objects:
1.) Trees: Static objects
2.) Rocks: Able to be thrown at enemies by the Hero
3.) Castle: Static object; guarded by Goblin King
4.) Gold: Static objects; surrounding and guarded by Goblin King
5.) Sword: Used by hero to slash enemies
6.) Shield: Used by hero to defend against enemy attacks
Events:
1.) Approach and Wait: An agent moves to a specified waypoint or other agent
2.) Body Animation: An agent makes a specific body animation
a. We use the “throwing” animation to cast magic spells.
b. We use the duck action to avoid attacks and pick up rocks.
c. We use the breakdance action for the Goblin’s martial arts.
d. We use the fight action for having non-Hero agents fight.
e. We use the sword swinging action for the Hero’s fight action.
f. We use the dying action when an agent is killed.
3.) Face Animation: An agent makes a specific facial expression
a. We use the following facial expressions:
i. Fire breathing is used to scare other agents.
ii. Headshaking is used to express disappointment.
4.) Hand Animation: An agent makes a specific gesture.
a. We use the following gestures:
i. Hands up
ii. Being cocky
iii. Thinking
iv. Waving
v. Pointing
vi. Clapping
5.) Look at: An agent looks at another agent or waypoint
6.) Face off: Two agents face each other
7.) Speak pair: Two agents have a conversation involving the “cocky” and “thinking” actions.
8.) Get ball and throw ball: An agent picks up a spherical object and throws it to a specified location.
9.) Fight: One agent takes the fighting stance, the other agent swings a sword, and the first agent dies.
10.) Fight Martial Arts: Fight where one agent uses martial arts (breakdancing) and the other takes the fighting stance
11.) Fight2: A choreographed battle between the hero and Goblin King, in which the hero attacks with a sword, the Goblin dodges and counterattacks with martial arts, the hero dodges and casts a spell, and the Goblin dies.
12.) Flee: An agent throws his hands up, looks at the Hero, shakes his head, waves, and flees to a specified waypoint.
13.) Scare: An agent makes a fire breather expression and the other agent flees.
14.) Scare Monster: An agent scares a monster, but the monster scares back causing the agent to flee.
15.) Applaud: An agent claps for another agent.
16.) Skeleton Wander: Skeletons wander between randomly selected waypoints.
17.) Knight Patrol: Knights patrol between randomly selected waypoints
18.) Throw rock: An agent picks up and throws a rock at another agent, who then dies.
PART 1: No User Interaction
Video: https://www.youtube.com/watch?v=XRTXX9IO_u4
Playable Demo: https://dl.dropboxusercontent.com/u/48369447/KnightGame/Demos/Part1/Part1_Demo/Part1_Demo.html
Controls:
• Press the ‘C’ key to switch views from one focused on the Hero to one showing the entire scene.
Description:
1.) One of the two vanguards is randomly selected.
2.) One of the following occurs randomly:
a. The selected knight approaches the hero and is slain.
b. The hero approaches the selected knight and slays him.
c. The hero approaches the selected knight, scares him, and makes him flee.
d. The hero grabs a rock and throws it at the selected knight, killing him.
3.) The hero then picks one of the same actions as before for the other knight.
4.) The hero moves to the graveyard.
5.) The two graveyard guards approach one another and have a random conversation.
6.) One of the two graveyard guards is randomly selected.
7.) The hero can perform one of the following actions randomly:
a. The hero approaches the selected knight and slays him.
b. The hero approaches the selected knight, scares him, and makes him flee.
8.) The hero then performs one of the same actions randomly to the other guard.
9.) The hero sneaks past the royal guards on the mountain.
10.) The hero engages in battle with the Goblin King and slays him with a spell.
11.) The royal guards flee.
12.) In parallel to steps 1-11 (and each other):
a. Skeleton ghosts wander in the graveyard.
b. Evil knights patrol the plains.
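The random choices in steps 2 and 7 can be sketched outside the behavior tree as a plain random pick; the outcome names below are illustrative labels, not the actual KADAPT subtree identifiers.

```python
import random

# Possible resolutions for a vanguard encounter (illustrative names,
# not the actual KADAPT subtree identifiers).
VANGUARD_OUTCOMES = [
    "knight_approaches_and_is_slain",
    "hero_approaches_and_slays",
    "hero_scares_knight_who_flees",
    "hero_throws_rock_and_kills",
]

def resolve_encounter(outcomes, rng=random):
    """Pick one outcome uniformly at random, mirroring the behavior
    tree's random selection among encounter subtrees."""
    return rng.choice(outcomes)

outcome = resolve_encounter(VANGUARD_OUTCOMES)
assert outcome in VANGUARD_OUTCOMES
```

In the actual tree, the same pick is made by a random selector node over the corresponding subtrees.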
Behavior Tree:
PART 2: User Controls Hero
Video: https://www.youtube.com/watch?v=kNq45E1Zg0I
Playable Demo: https://dl.dropboxusercontent.com/u/48369447/KnightGame/Demos/Part2/Part2_Demo/Part2_Demo.html
Controls:
• Press the ‘C’ key to switch views from one focused on the Hero to one showing the entire scene.
• The user dictates the actions the agent takes. HE/SHE MUST FOLLOW THIS PROCEDURE:
o Select an antagonist:
• Press ‘1’: Knight 1 (Vanguard 1) is antagonist
• Press ‘2’: Knight 2 (Vanguard 2) is antagonist
• Press ‘3’: Knight 3 (Mid-guard/Graveyard Guard 1) is antagonist
• Press ‘4’: Knight 4 (Mid-guard/Graveyard Guard 2) is antagonist
• Press ‘5’: Knight 5 (Royal Guard/Mountain Guard 1) is antagonist
• Press ‘6’: Knight 6 (Royal Guard/Mountain Guard 2) is antagonist
• Press ‘7’: Knight 7 (Royal Guard/Mountain Guard 3) is antagonist
• Press ‘8’: Knight 8 (Royal Guard/Mountain Guard 4) is antagonist
• Press ‘G’: Goblin King is the antagonist
• Press ‘S’: One of the skeletons is the antagonist
o Select an action:
• Press ‘F’: Fight
• Press ‘B’: Scare
o Hit the return key to perform the action
o Repeat until protagonist is dead or Goblin King is defeated.
Description:
General Requirement: protagonist and antagonist must be different and alive/not fled.
The user is free to select any set of actions, but must adhere to the basic story line:
1.) All vanguards must be defeated first, but the order in which they are defeated and how they are defeated do not matter.
2.) Once the vanguards are defeated, the hero can battle/scare the mid-guards and skeletons in any order, but not the royal guards.
3.) Scaring the skeleton will result in the skeleton scaring the player back resulting in the hero fleeing and the Goblin winning. The skeleton can be fought and laid to rest.
4.) Defeating the skeleton is optional.
5.) Once all of the mid-guards are defeated the player can battle the royal guards or go directly to the Goblin King.
6.) Defeating the royal guards is optional.
7.) Scaring the Goblin King will result in him scaring back causing the Hero to flee and lose the game.
8.) Once the Goblin King is defeated the player has won the game (and is free to go back and defeat any remaining antagonists).
Behavior Tree (Simplified, See Code for Full):
PART 3: User Can Control Any Character
Video: https://www.youtube.com/watch?v=cmeSp1qqIjc
Playable Demo: https://dl.dropboxusercontent.com/u/48369447/KnightGame/Demos/Part3/Part3_Demo/Part3_Demo.html
Controls:
• Press the ‘C’ key to switch views from one focused on the Hero to one showing the entire scene.
• The user dictates the actions the agent takes. HE/SHE MUST FOLLOW THIS PROCEDURE:
o Select a protagonist:
• Press ‘H’: Hero is protagonist
• Press ‘1’: Knight 1 (Vanguard 1) is protagonist
• Press ‘2’: Knight 2 (Vanguard 2) is protagonist
• Press ‘3’: Knight 3 (Mid-guard/Graveyard Guard 1) is protagonist
• Press ‘4’: Knight 4 (Mid-guard/Graveyard Guard 2) is protagonist
• Press ‘5’: Knight 5 (Royal Guard/Mountain Guard 1) is protagonist
• Press ‘6’: Knight 6 (Royal Guard/Mountain Guard 2) is protagonist
• Press ‘7’: Knight 7 (Royal Guard/Mountain Guard 3) is protagonist
• Press ‘8’: Knight 8 (Royal Guard/Mountain Guard 4) is protagonist
• Press ‘G’: Goblin King is the protagonist
• Press ‘S’: One of the skeletons is the protagonist
o Select an antagonist:
• Hold the ‘Shift’ Key and do one of the following:
• Press ‘H’: Hero is antagonist
• Press ‘1’: Knight 1 (Vanguard 1) is antagonist
• Press ‘2’: Knight 2 (Vanguard 2) is antagonist
• Press ‘3’: Knight 3 (Mid-guard/Graveyard Guard 1) is antagonist
• Press ‘4’: Knight 4 (Mid-guard/Graveyard Guard 2) is antagonist
• Press ‘5’: Knight 5 (Royal Guard/Mountain Guard 1) is antagonist
• Press ‘6’: Knight 6 (Royal Guard/Mountain Guard 2) is antagonist
• Press ‘7’: Knight 7 (Royal Guard/Mountain Guard 3) is antagonist
• Press ‘8’: Knight 8 (Royal Guard/Mountain Guard 4) is antagonist
• Press ‘G’: Goblin King is the antagonist
• Press ‘S’: One of the skeletons is the antagonist
o Select an action:
• Press ‘F’: Fight
• Press ‘B’: Scare
o Hit the return key to perform the action
o Selecting a protagonist causes the camera to focus on the newly selected protagonist.
o Repeat until protagonist is dead or Goblin King is defeated.
Description:
The user is free to select any set of actions, but must adhere to the basic story line for each character.
General Requirement: protagonist and antagonist must be different and alive/not fled.
For the hero as protagonist (same as previous):
1.) All vanguards must be defeated first, but the order in which they are defeated and how they are defeated do not matter.
2.) Once the vanguards are defeated, the hero can battle/scare the mid-guards and skeletons in any order, but not the royal guards.
3.) Scaring the skeleton will result in the skeleton scaring the player back resulting in the hero fleeing and the Goblin winning. The skeleton can be fought and laid to rest.
4.) Defeating the skeleton is optional.
5.) Once all of the mid-guards are defeated the player can battle the royal guards or go directly to the Goblin King.
6.) Defeating the royal guards is optional.
7.) Scaring the Goblin King will result in him scaring back causing the Hero to flee and lose the game.
8.) Once the Goblin King is defeated (by fighting) the player has won the game (and is free to go back and defeat any remaining antagonists).
For an evil knight as protagonist:
1.) Evil knights can only interact with other knights of the same category:
a. Vanguards w/ Vanguards
b. Mid-guards w/ Mid-guards
c. Royal guards w/ Royal guards
2.) When evil knights interact with other evil knights (regardless of the user selected action), they start a conversation.
3.) Evil knights can interact with the hero only if he has defeated all evil knights of lesser rank (vanguards = lowest, mid-guards = medium, royal guards = highest)
a. “They will not leave their post unless required”
4.) If evil knights fight the hero, they will be killed.
5.) If evil knights try to scare the hero, he will scare back, and they will flee.
6.) Evil knights will not interact with skeletons because they know better.
7.) Evil knights will not interact with the Goblin King because he is their ruler.
For the skeleton as protagonist:
1.) Skeletons can only scare other agents. They cannot fight.
2.) If the skeleton scares the hero or an evil knight, the hero or evil knight will flee.
3.) If the skeleton scares the Goblin King, the Goblin King scares back, and the ghost flees. Goblin Kings are not afraid of anyone!
For the Goblin King as protagonist:
1.) When the Goblin King interacts with an evil knight (regardless of the user selected action), the evil knight will applaud the Goblin King.
2.) The Goblin King always kills when he fights as the protagonist, even the hero!
3.) The Goblin King uses martial arts when he fights.
4.) The Goblin King scares everybody causing them to flee, nobody will scare back.
Behavior Tree (Simplified, See Code for Full):
Monday, December 7, 2015
Advanced Crowd Simulation and Planning
Part 1
We show the results of combining global and local planning and collision avoidance algorithms. In particular, we combine A* as a global path planner with the social forces model for avoiding collisions when agents get too close without deviating too much from the path.
Playlist URL: https://www.youtube.com/playlist?list=PLvNi45_IRtsx033FJz2fIDGHSYws4x6nV
Course 1: Narrow Squeeze (Wall-Squeeze)
URL: https://www.youtube.com/watch?v=OBk-daUFsXw
Description: We demonstrate agents passing each other in a narrow passage and squeezing through an even narrower obstacle.
Course 2: Double Squeeze (Double-Squeeze)
URL: https://www.youtube.com/watch?v=tvl1Hs6JapU
Description: We demonstrate agents rotating around each other to pass a narrow passage.
Course 3: Doorway (Doorway-Two-Way)
URL: https://www.youtube.com/watch?v=nLLCkrkLouM
Description: We demonstrate agents rotating around each other to pass a narrow passage.
Course 4: Bottleneck (Bottleneck-Squeeze)
URL: https://www.youtube.com/watch?v=owM22xCNgrA
Description: Here we demonstrate a large number of agents passing through a narrow space.
Course 5: Maze (Maze)
URL: https://www.youtube.com/watch?v=gmwFyTqkPoc
Description: We demonstrate an agent solving a maze.
Course 6: Crowd Crossing (Crowd_Crossing)
URL: https://www.youtube.com/watch?v=ZJsqOTvudNo
Description: We demonstrate a large object traversing a large crowd of small agents flowing in the perpendicular direction.
Course 7: Roundabout (Hallway-Four-Way-Rounded-Roundabout)
URL: https://www.youtube.com/watch?v=bqqW0Qhwzi0
Description: We see agents participating in a roundabout in order to cross an intersection.
Course 8: Hallway (Hallway-Two-Way)
URL: https://www.youtube.com/watch?v=9VX2aUEbv9w
Description: We demonstrate agents traversing a hallway in two directions.
Course 9: Office Evacuation (Office-Complex)
URL: https://www.youtube.com/watch?v=AabTuTdZcpU
Description: We demonstrate agents evacuating a complex office layout.
Course 10: Exit Plane (Plane_Egress)
URL: https://www.youtube.com/watch?v=2tob9-DuJa0
Description: We demonstrate agents exiting a plane.
Course 11: Enter Plane (Plane_Ingress)
URL: https://www.youtube.com/watch?v=LhP6hqM6Mkk
Description: We demonstrate agents entering a plane and finding their seats.
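A minimal sketch of the hybrid steering used in the courses above: each agent is attracted toward its next A* waypoint while nearby agents add Helbing-style repulsive forces. The gains, falloff, and time step below are illustrative assumptions, not SteerLite's tuned parameters.

```python
import math

def social_force_step(pos, waypoint, neighbors, dt=0.1,
                      goal_gain=1.0, repulse_gain=2.0, falloff=1.0):
    """One integration step: attraction toward the next A* waypoint
    plus exponential repulsion from nearby agents (Helbing-style).
    Gains and falloff are illustrative, not SteerLite's tuned values."""
    # Attractive force toward the current global-path waypoint.
    gx, gy = waypoint[0] - pos[0], waypoint[1] - pos[1]
    dist = math.hypot(gx, gy) or 1e-9
    fx, fy = goal_gain * gx / dist, goal_gain * gy / dist
    # Repulsive force from each nearby agent, decaying with distance.
    for nx, ny in neighbors:
        dx, dy = pos[0] - nx, pos[1] - ny
        d = math.hypot(dx, dy) or 1e-9
        mag = repulse_gain * math.exp(-d / falloff)
        fx += mag * dx / d
        fy += mag * dy / d
    return (pos[0] + fx * dt, pos[1] + fy * dt)
```

Because the repulsion decays quickly, agents only deviate from the A* path when others get close, which is the behavior demonstrated in the videos.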
Part 2
We choreograph a "show" combining A* planning, social forces, and curve calculations (for agents and the camera)! We simulate movement in a large, “real world” library. A relatively large number of agents start at different points inside and outside of the building. Agents encounter hallways, simple maze-like conditions, and narrow/squeeze-like openings. Some agents follow specified, computed curves (distinct colors) and others use A* + social forces (blue agents). The camera is also animated with a specified curve.
URL: https://youtu.be/mXx2vehlU0M
Part 3
We make a "best of" showcase video of all our work over the last semester.
URL: https://youtu.be/DbNcHeMJFjU
Wednesday, November 25, 2015
Implementing Behaviors in Unity
Overview: To this point, we've worked with only basic character animations and interactions. In this project, we focus on creating more immersive and complex animations. Namely, we focus on how to model more advanced interactions between characters and other characters and between characters and their environment. We do so using a tool called the "behavior tree".
In this example project, we have a number of people attending an exhibition at an art gallery. We highlight some of the actions in the following video:
An interactive demo is available at: Behavior Tree Demo (Art Gallery)
Expanded Details:
This is a short Unity "game" demonstrating the capabilities of our behaviors, implemented using the KADAPT and Final IK libraries. We have tried to make the behavior non-linear, scalable, and immersive. Multiple characters interact with each other, and the primary agent in this story, whom we like to call Green (can you guess why?), makes some random choices and interacts with various characters in different ways. The overall story arc is that there is an art exhibition and the speaker is mingling with the guests. At the start, everyone shows up and the speaker thinks and decides what he wants to do first. During the middle, he mingles and does various things. At the end, he speaks to his close friend, whom he won't start without speaking to, and then he gets up and speaks. The guests move close to listen.
Another playable character is also included. This agent, imaginatively called Red, is actually controllable by the player using the WASD (or arrow) keys. He uses a secondary behavior which makes him follow a small green disc around using pathfinding and the animations provided for the AI characters. He serves as the interactive element of the game not only in that he moves around, but also because the behavior is altered by his location near the end of the narrative. In fact, the player can control when or even if the narrative ever reaches its end.
Functionality and Implementation: We've included several functionalities/affordances and made some new ones as well. Some of the functions the agents can use are:
ApproachAndWait: Move to location and wait (included)
ST_Body_Animation: Run animations (adapted from tutorial)
ST_Hand_Animation: Run animations (adapted from tutorial)
ST_Face_Animation: Run animations (adapted from tutorial)
ST_LookAt: An agent turns to look at some GameObject, including other agents.
ST_Face_Off: Two agents simultaneously turn to face each other.
ST_Shake_Hands: Often fails during runtime, but not always. Function left in but not used.
ST_Speak_Pair: Two agents make gestures at each other to emulate speaking. This function goes through a list for each one using NonLinearSelect.
ST_Speak_Pair (looped): Same as before but allows for control of how many times it loops.
ST_Speak_Multiple: Currently only used for 3 agents, but can easily be extended for more. We planned to make it more complex but ended up not being able to implement the necessary functions (see looping through nearest agents problem).
ST_Get_Ball: Agent goes for and picks up a ball regardless of where it is using IK. Requires that the agent be positioned beforehand.
ST_Throw_Ball: Agent throws ball in general direction of a GameObject using IK.
ST_Press_Button: Agent presses a button using IK. Must be positioned beforehand, similar to ST_Get_Ball.
ST_Use_Ball_Event: Something of a "sub-behavior". Agent goes to ball's location, picks it up, and throws it. The pitcher and catcher interact.
ST_Interaction_Event: Another small collection of actions. Just two agents interacting and high-fiving.
ST_Follow: An agent will go to the location of a GameObject indefinitely.
In order to implement the ball interactions, we created a new node for handling object movement in Ball.cs. This is a simple implementation that allows the agent to throw the ball in some GameObject's direction at a certain speed.
Behavior Tree: The included control nodes we primarily used were Selector, Sequence, SequenceParallel, SequenceShuffle, and DecoratorLoop. We also implemented a new control node of our own, placed in NonLinearSelect.cs, which randomly selects one sequence/action from a list and runs it. This is useful because making decisions randomly helps keep the game non-linear; the behavior is even less linear if the node doesn't just shuffle its children but actually skips some of them entirely. We use this at certain decision points as well as for conversation gestures. Selector goes through its child sequences in order rather than choosing over a random range like NonLinearSelect, Sequence goes through affordances one by one, SequenceParallel is useful for running multiple sequences simultaneously, SequenceShuffle goes through all affordances in random order, and DecoratorLoop repeats sequences.
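The idea behind NonLinearSelect can be sketched in a few lines. This is Python rather than KADAPT's C#, and it collapses KADAPT's Running/Success/Failure ticking into a plain callable, so treat it as a sketch of the selection logic only.

```python
import random

class NonLinearSelect:
    """Simplified stand-in for our NonLinearSelect control node:
    on each run, pick one child at random and execute only it.
    (KADAPT's real nodes tick with Running/Success/Failure status;
    this sketch reduces a child subtree to a plain callable.)"""
    def __init__(self, children, rng=random):
        self.children = children
        self.rng = rng

    def run(self):
        # Unlike a shuffle, the unchosen children never run at all.
        child = self.rng.choice(self.children)
        return child()

greet = NonLinearSelect([
    lambda: "wave",
    lambda: "clap",
    lambda: "point",
])
assert greet.run() in {"wave", "clap", "point"}
```

This is what distinguishes it from SequenceShuffle: a shuffle eventually runs every child, while NonLinearSelect drops all but one.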
Extra Credit Attempts: We tried to add some things not strictly called for in the assignment such as extra affordances, the interactive element being a secondary behavior with a controllable AI agent, and a complex behavior with numerous points of decision. Our focus was telling a story while still being as non-linear as possible.
Known bugs and failures: We made some attempts which didn't quite work out. This includes giving agents a line of sight/detection radius for other agents, allowing agents to interact automatically as they move past each other. Also, sometimes the IK handler acts up when an agent presses a button during runtime. We aren't sure why this is happening, as our code works fine for handling the ball interactions. The most noticeable problem is that sometimes the animations cut off or look strange/jerky. We couldn't figure out how to fix this, or whether it might be something to do with KADAPT or how we were using it. Given the limited time, we couldn't get to know the library well enough to fix this problem.
Behavior Tree (click to enlarge):
Friday, November 13, 2015
A4: Path Finding
Group Members:
1) Zachary Daniels - zad7
2) Nilay Chakraborty - nc541
3) Zooraze Tariq - zt32
4) Tanvi Borkar - tab251
A* is a graph search algorithm. It maintains two quantities for each node in the graph: the g-value (the cost of reaching the node from the initial position/number of steps) and the h-value (a heuristic estimate of the distance to the goal). These are combined to give the f-value (f = g + h). If h is admissible and consistent, A* is guaranteed to produce an optimal path. The closer the heuristic is to the true distance to the goal (assuming the heuristic is consistent and admissible), the fewer nodes need to be expanded. In this project, we apply the A* algorithm to planning paths in the "SteerLite" world. We show the effect of using different heuristics (Euclidean distance and Manhattan distance), different tie-breaking strategies, penalizing different types of movements (should we treat diagonal steps the same as horizontal and vertical steps?), and different weights in the weighted variant of A*.
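A compact sketch of grid A* with a pluggable heuristic and a weight w (f = g + w·h), covering the variants compared in this post; this is illustrative Python, not the SteerLite implementation.

```python
import heapq, math

def astar(start, goal, passable, heuristic, w=1.0):
    """Grid A* with 4-connected unit-cost moves and f = g + w*h.
    `passable(cell)` says whether a cell can be entered; with w > 1
    this is weighted A* and the path may be suboptimal."""
    open_heap = [(w * heuristic(start, goal), 0.0, start, None)]
    came_from, g_cost = {}, {start: 0.0}
    while open_heap:
        f, g, cell, parent = heapq.heappop(open_heap)
        if cell in came_from:          # already settled
            continue
        came_from[cell] = parent
        if cell == goal:               # reconstruct path back to start
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            ng = g + 1.0
            if passable(nxt) and ng < g_cost.get(nxt, math.inf):
                g_cost[nxt] = ng
                heapq.heappush(open_heap,
                               (ng + w * heuristic(nxt, goal), ng, nxt, cell))
    return None

manhattan = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
euclidean = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])

in_bounds = lambda c: 0 <= c[0] < 10 and 0 <= c[1] < 10
path = astar((0, 0), (9, 9), in_bounds, manhattan)
assert len(path) == 19  # 18 unit steps on an empty 10x10 grid
```

Swapping `manhattan` for `euclidean`, or raising `w` above 1, reproduces the experimental knobs studied below; the diagonal-cost variant of Part 3 would additionally require 8-connected moves with a separate diagonal step cost.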
Recordings
Part 1:
Euclidean/Search 1:
Euclidean/Search 2:
Manhattan/Search 1:
Manhattan/Search 2:
Part 2:
Euclidean/Search 1/Larger g:
Euclidean/Search 1/Smaller g:
Euclidean/Search 2/Larger g:
Euclidean/Search 2/Smaller g:
Manhattan/Search 1/Larger g:
Manhattan/Search 1/Smaller g:
Manhattan/Search 2/Larger g:
Manhattan/Search 2/Smaller g:
Part 3:
Euclidean/Search 1/Higher Diagonal Cost:
Euclidean/Search 2/Higher Diagonal Cost:
Manhattan/Search 1/Higher Diagonal Cost:
Manhattan/Search 2/Higher Diagonal Cost:
Part 4:
Euclidean/Search 1/Weight 2:
Euclidean/Search 2/Weight 2:
Euclidean/Search 1/Weight 4:
Euclidean/Search 2/Weight 4:
Euclidean/Search 1/Weight 8:
Euclidean/Search 2/Weight 8:
Manhattan/Search 1/Weight 2:
Manhattan/Search 2/Weight 2:
Manhattan/Search 1/Weight 4:
Manhattan/Search 2/Weight 4:
Manhattan/Search 1/Weight 8:
Manhattan/Search 2/Weight 8:
Observations
Part 1:
| Testcase | Manhattan: Solution Length | Manhattan: Nodes Expanded | Euclidean: Solution Length | Euclidean: Nodes Expanded |
|---|---|---|---|---|
| Search 1 | 38 | 398 | 26 | 282 |
| Search 2 | 89 | 2014 | 75 | 2150 |

As seen from the observations, the Manhattan heuristic expands more nodes than the Euclidean heuristic on Search 1, while the counts are comparable on Search 2 (2014 vs. 2150). In both test cases, the path returned with the Manhattan heuristic is longer than the one returned with Euclidean.
Part 2:
| Testcase | Manhattan, Smaller G (Length / Nodes) | Manhattan, Larger G (Length / Nodes) | Euclidean, Smaller G (Length / Nodes) | Euclidean, Larger G (Length / Nodes) |
|---|---|---|---|---|
| Search 1 | 38 / 398 | 38 / 325 | 26 / 282 | 26 / 282 |
| Search 2 | 89 / 2014 | 89 / 1817 | 75 / 2151 | 75 / 2150 |

In the case of the Manhattan heuristic, breaking ties toward the smaller g-value behaves the same way as using no tie-breaking. When larger g-values are used for tie-breaking, fewer nodes are expanded but the solution has the same length. For the Euclidean heuristic, the number of expanded nodes and the solution length are essentially the same for both the smaller and larger g-value strategies (2151 vs. 2150 nodes), and both match the case where no tie-breaking is used.
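The larger-g tie-break can be encoded directly in the priority-queue key: order by f, then by -g, so among equal-f nodes the one with larger g (farther along its path) pops first. A minimal illustration, not the assignment's exact code:

```python
import heapq

# Three nodes sharing the same f-value (f = 10 throughout):
# with key (f, -g), larger g pops first; with (f, g), smaller g pops first.
larger_g_first = [(10, -g, node) for g, node in [(3, "a"), (7, "b"), (5, "c")]]
smaller_g_first = [(10, g, node) for g, node in [(3, "a"), (7, "b"), (5, "c")]]

heapq.heapify(larger_g_first)
heapq.heapify(smaller_g_first)

assert heapq.heappop(larger_g_first)[2] == "b"   # g = 7 wins the tie
assert heapq.heappop(smaller_g_first)[2] == "a"  # g = 3 wins the tie
```

Preferring larger g favors nodes deeper along a candidate path, which is why it tends to expand fewer nodes on the way to an equal-length solution.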
Part 3:
| Testcase | Manhattan: Solution Length | Manhattan: Nodes Expanded | Euclidean: Solution Length | Euclidean: Nodes Expanded |
|---|---|---|---|---|
| Search 1 | 38 | 325 | 29 | 302 |
| Search 2 | 89 | 1815 | 82 | 2278 |

In the case of the Manhattan heuristic, the solution length is the same as in Part 1, but fewer nodes are expanded. For the Euclidean heuristic, however, both the solution length and the number of expanded nodes are greater than in Part 1.
Part 4:
| Testcase | Weight | Manhattan: Solution Length | Manhattan: Nodes Expanded | Euclidean: Solution Length | Euclidean: Nodes Expanded |
|---|---|---|---|---|---|
| Search 1 | 2 | 38 | 283 | 26 | 194 |
| Search 1 | 4 | 44 | 201 | 33 | 159 |
| Search 1 | 8 | 44 | 171 | 35 | 110 |
| Search 2 | 2 | 89 | 837 | 75 | 996 |
| Search 2 | 4 | 89 | 400 | 76 | 415 |
| Search 2 | 8 | 89 | 269 | 77 | 187 |

For both the Manhattan and the Euclidean heuristic, the number of nodes expanded decreases as the weight increases. The solution length changes only modestly with increasing weight (for example, from 38 to 44 for Manhattan on Search 1).
Quantitative Analysis for Weighted A*
For A*, the evaluation function f(n) is given by
f(n) = g(n) + h(n)
where g(n) is the actual cost of reaching node n from the start node, and h(n) is the estimated cost of reaching the goal from node n.
A* gives the best (optimal) path when the heuristic h(n) is admissible and consistent. When h(n) is less than the actual cost of reaching the goal from n, the algorithm still gives the optimal path, but in doing so it expands more nodes. When h(n) is exactly equal to the actual cost of reaching the goal from n, the algorithm heads straight for the goal, making the search as fast as possible; this scenario is rarely achievable in practice. When h(n) is greater than the actual cost of reaching the goal from n, our heuristic is no longer admissible, which means we are no longer guaranteed the optimal path; in exchange, we expand fewer nodes, making the algorithm faster.
In weighted A*, we make the evaluation function more dependent on the heuristic function: f(n) = g(n) + w·h(n). As the weight w increases, f(n) becomes more and more dominated by h(n), the number of nodes expanded decreases, and the paths may become sub-optimal. This behavior is depicted in the readings observed by running weighted A* with the two heuristics, Manhattan and Euclidean distance. The readings are recorded in the table given below:
| Weight | No. of Expanded Nodes | Heuristic Used | Testcase |
|---|---|---|---|
| 2 | 283 | Manhattan | search1 |
| 2 | 837 | Manhattan | search2 |
| 4 | 201 | Manhattan | search1 |
| 4 | 400 | Manhattan | search2 |
| 8 | 171 | Manhattan | search1 |
| 8 | 269 | Manhattan | search2 |
| 2 | 194 | Euclidean | search1 |
| 2 | 996 | Euclidean | search2 |
| 4 | 159 | Euclidean | search1 |
| 4 | 415 | Euclidean | search2 |
| 8 | 110 | Euclidean | search1 |
| 8 | 187 | Euclidean | search2 |
We will try to explain the above observations with an example. Suppose we have two types of terrain in a game: flat land and mountains. We can move directly in a straight line through the mountains to reach our goal, but this requires a lot of effort; alternatively, we can traverse around the mountains on the flat terrain, which requires much less effort. Let the cost of moving within the flat terrain be 1 and the cost of moving through the mountainous terrain be 3. The search along the mountainous terrain is then three times more expensive than the search along the flat terrain, meaning we can move three steps on flat ground for every one step in the mountains. Suppose the total effort of the flat path is less than the total effort of crossing the mountains. When finding a path using an admissible heuristic, the algorithm will always find the route along the flat terrain, even though a "shorter" (but more expensive) path through the mountains exists, and it will spend time avoiding mountainous routes and finding alternatives to them. If we now increase the weight of the heuristic by some factor, the time spent finding alternatives around the mountainous parts decreases; as the weight increases, the mountainous parts are also considered, because we pay less attention to the effort required to traverse a path and place more emphasis on our estimated distance to the goal. The resulting path will not be the optimal one, but the algorithm will definitely give results faster because it expands fewer nodes.
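The terrain example can be sketched with a small cost grid, keeping the flat cost of 1 and mountain cost of 3 from the text; the grid layout (a three-column mountain ridge with a flat pass along the top row) is an illustrative assumption.

```python
import heapq

def grid_astar(start, goal, cost, w=1.0):
    """A* over a dict of cell -> entry cost, with a Manhattan heuristic
    scaled by weight w (illustrative sketch, not the SteerLite code)."""
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    frontier = [(w * h(start), 0.0, start, None)]
    parents, best_g = {}, {start: 0.0}
    while frontier:
        _, g, cell, par = heapq.heappop(frontier)
        if cell in parents:
            continue
        parents[cell] = par
        if cell == goal:
            path, c = [], cell
            while c is not None:
                path.append(c)
                c = parents[c]
            return path[::-1], g
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dx, cell[1] + dy)
            if nxt in cost:
                ng = g + cost[nxt]          # pay the entered cell's cost
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(frontier, (ng + w * h(nxt), ng, nxt, cell))
    return None, float("inf")

# 7x5 world: flat cells cost 1; a mountain ridge at x in {2, 3, 4}
# costs 3 per step, with a flat pass along the row y = 4.
cost = {(x, y): (3 if x in (2, 3, 4) and y < 4 else 1)
        for x in range(7) for y in range(5)}
flat_route, c1 = grid_astar((0, 2), (6, 2), cost, w=1.0)   # detours via the pass
fast_route, c2 = grid_astar((0, 2), (6, 2), cost, w=5.0)   # cuts through the ridge
assert c1 <= c2  # the admissible (w = 1) search is never costlier
```

With w = 1 the search detours around the ridge for a total cost of 10; with w = 5 the heuristic dominates and the search cuts straight through the mountains for a cost of 12, found after far fewer expansions.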
By increasing the weight on the heuristic, we bias the search toward the goal. If the weight is very large, the algorithm behaves like greedy best-first search, which always expands the node that appears closest to the goal. Hence weighted A* becomes faster and expands fewer nodes as w increases, but it does not (necessarily) give the optimal path.