
Realistic Agent Movement in Dynamic Game Environments

Date created
2005-05-31
Authors/Contributors
Author: Graham, Ross
Abstract
Realistic and intelligent agent movement remains one of the greatest challenges for games developers. Path-finding strategies are usually employed as a means of allowing an agent to navigate from one part of the game world to another. Typically the game world is stored in a pre-processed structure called a map, which contains all of the relevant geometry. In order to cut down the search space for the path-finder, this map is broken down and simplified. The path-finder then uses this simplified representation to determine the best path from the starting point to the desired destination. These simplified representations correspond to graphs, and algorithms such as Dijkstra's and A* [6] can then be employed to quickly find paths between the nodes in the graph.

The graph is based on a pre-processed static representation of the game world. However, the assumption that the geometry of the game remains static during the course of play is no longer necessarily valid. This difficulty is compounded by the fact that the agent typically has no real-time awareness of the environment around it. This situation results in a number of problems for path-finders, each of which we now outline.

The increasing use of physics engines opens up the possibility of completely dynamic game geometry, where players and agents can physically alter the structure of the game world as play progresses, by knocking over walls for example [2]. Dynamic obstacles can therefore be introduced that block previously accessible nodes on the graph. When this happens the agent will still believe it can walk along the blocked path, because it relies on the pre-processed static graph. Techniques have been developed that improve agents' reactive abilities when dynamic objects obstruct a path. These work well in some situations, but generally the agent will not react until it has collided with an obstacle, as it has no sense of awareness until a trigger is set by the collision.

Another problem is the rigid and unrealistic movement that occurs when the agent walks in a straight line between nodes. This stems from the trade-off between speed (the fewer nodes to search, the better) and realistic movement (the more nodes, the more realistic the movement). Some games improve on this by applying spline curves to smooth out the path along the nodes.

A further problem is implementing tactical path-finding. This involves finding not just the shortest route but also the route that offers the most cover, or that avoids unnecessary encounters with undesirable game entities. One approach is to modify the cost heuristic of A* to take the line of fire of enemy agents into account [7]. This adds realism to the game and presents a less predictable opponent for the human player. The drawback is that the added cost calculations greatly enlarge the search space that A* must process. This approach also assumes that the threat remains static for the duration of the path, which is seldom the case.

Generally game developers add special-case code to deal with these problems, but typically it is only applicable to that particular game [1]. This paper examines these problems and introduces the concept of applying learning techniques to solve them in a novel way [4]. Our solution is to provide the agent with a means of navigating its own way around the world, rather than simply relying on routes provided by the game engine.
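To make the graph-search step described above concrete, the following is a minimal sketch of A* over a waypoint graph, with an optional per-node penalty standing in for a tactical cost such as exposure to enemy line of fire. The graph, pos, and threat_cost structures are illustrative assumptions, not the representation used in the paper or in Quake 2.

    import heapq
    import itertools
    import math

    def a_star(graph, pos, start, goal, threat_cost=None):
        # graph: node -> list of (neighbour, edge_cost); pos: node -> (x, y).
        # threat_cost, if given, maps a node to an extra penalty (e.g. exposure
        # to enemy line of fire), giving a simple "tactical" variant of A*.
        def h(n):                                    # straight-line heuristic
            (x1, y1), (x2, y2) = pos[n], pos[goal]
            return math.hypot(x2 - x1, y2 - y1)

        tie = itertools.count()                      # keeps heap comparisons numeric
        open_set = [(h(start), next(tie), start)]
        g = {start: 0.0}
        came_from = {start: None}
        done = set()
        while open_set:
            _, _, node = heapq.heappop(open_set)
            if node in done:
                continue
            done.add(node)
            if node == goal:                         # walk back along parent links
                path = [node]
                while came_from[path[-1]] is not None:
                    path.append(came_from[path[-1]])
                return path[::-1]
            for nbr, cost in graph.get(node, ()):
                extra = threat_cost(nbr) if threat_cost else 0.0
                new_g = g[node] + cost + extra
                if new_g < g.get(nbr, float("inf")):
                    g[nbr] = new_g
                    came_from[nbr] = node
                    heapq.heappush(open_set, (new_g + h(nbr), next(tie), nbr))
        return None                                  # goal unreachable

For example, threat_cost might return a large penalty for waypoints visible from a known enemy position, biasing the search toward covered routes at the price of expanding more nodes, which is the trade-off noted above.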
To navigate in this way, the agent requires two important abilities. Firstly, it needs to be able to examine its environment in order to know what is in front of it and around it, giving it real-time awareness. Secondly, it needs some way of processing this information to accomplish tasks such as steering around obstacles that have been placed in its path.

The first ability is achieved by embedding sensors in the agent. This is a concept borrowed from robotics, where ultrasound or infrared sensors are common. We adapt this idea for our agents by casting rays which test for intersections with the game geometry. In this way information can be provided to the agent about the proximity of objects within its field of vision.

The second ability is being able to process this information. Our solution is to furnish each agent with an Artificial Neural Network (ANN) [3] which takes the sensor information as input. The ANN is trained to exhibit the behaviour we want, namely the ability to steer around objects. We describe how this provides robust steering behaviour that is tolerant of noisy data. Another advantage of this approach is that the processing required is minimal, so multiple agents can be imbued with this behaviour without placing a major strain on the CPU.

This is used in conjunction with a traditional path-finding algorithm. The algorithm works out a path for the agent, but the sensors and the ANN are responsible for moving the agent along that path, and are capable of adapting the path to steer around obstacles or other dynamically introduced geometric changes. Our system is implemented using the Quake 2 [5] game engine, and we have extensively tested these ideas against more traditional approaches to path-finding. The game engine gives us a test bed in which a genetic algorithm [6] is used in real time to evolve the weights of the neural network. Since the sensors influence the agent's movement in real time as it walks from node to node, it tends to veer gradually away from obstacles, resulting in less rigid movement. Giving the agent this real-time awareness also complements the tactical elements of path-finding, as the agent can be alerted in real time to imminent threats. Our results indicate that this approach is extremely useful in the dynamic environments that are becoming the norm in modern computer games. (A rough sketch of the sensor-and-network arrangement appears after the references.)

References
[1] Cain, Timothy, "Practical Optimizations for A*", AI Game Programming Wisdom, Charles River Media, 2002.
[2] Eberly, David H., "Game Physics", Elsevier, 2004.
[3] Fausett, Laurene, "Fundamentals of Neural Networks: Architectures, Algorithms, and Applications", Prentice-Hall, 1994.
[4] Graham, Ross, "Neural Networks for Real-time Pathfinding in Computer Games", ITB Journal, Issue 9, 2004.
[5] Quake 2, id Software, www.idsoftware.com/games/quake/quake2/
[6] Russell, Stuart, and Norvig, Peter, "Artificial Intelligence: A Modern Approach", Prentice-Hall, 1995.
[7] Van der Sterren, William, "Tactical Path-Finding with A*", Game Programming Gems 3, Charles River Media, 2002.
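As a rough illustration of the sensor-and-network arrangement described above, the sketch below casts a fan of rays through a hypothetical intersect(origin, direction, max_range) routine assumed to be supplied by the engine, normalises the returned distances, and feeds them into a single-layer network whose weights would be the values evolved by the genetic algorithm. The number of rays, field of view, and weight layout are illustrative assumptions, not the configuration used in the paper.

    import math

    def sense(position, heading, intersect, n_rays=5, fov=math.pi / 2, max_range=200.0):
        # Cast n_rays across the agent's field of view and return distances to the
        # nearest geometry, normalised so that 1.0 means "nothing within range".
        # intersect(origin, direction, max_range) is assumed to be provided by the
        # game engine and to return the hit distance, or None if nothing is hit.
        readings = []
        for i in range(n_rays):
            angle = heading - fov / 2 + fov * i / (n_rays - 1)
            direction = (math.cos(angle), math.sin(angle))
            hit = intersect(position, direction, max_range)
            readings.append((hit if hit is not None else max_range) / max_range)
        return readings

    def steer(weights, readings):
        # A single-layer network: a weighted sum of the sensor readings plus a bias,
        # squashed by tanh and scaled to a steering correction in radians. The
        # weight vector (one weight per ray plus a bias) is the kind of quantity a
        # genetic algorithm would evolve; here it is simply a flat list of floats.
        bias = weights[len(readings)]
        activation = bias + sum(w * r for w, r in zip(weights, readings))
        return math.tanh(activation) * (math.pi / 4)  # clamp the turn to +/- 45 degrees

Each frame, a correction of this kind would be added to the heading the path-follower is already pursuing, so the agent drifts away from nearby obstacles while still moving toward the next node on its path.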
Description
Contact: Ross Graham, Institute of Technology Blanchardstown, grahamrp@gmail.com
Copyright statement
Copyright is held by the author(s).
Language
English
