Movement in Game Worlds: Testing Conditions
Because we are aiming for state-of-the-art NPC intelligence, nothing less than complex 3D worlds will be satisfactory—like those human players deal with. The environment needs to be one frequently used by human players, not a simplified test bed built for the sake of AI development. By keeping the experimentation process up to the standard of realism of actual game worlds, fewer problems will arise from fundamentally flawed assumptions. Keeping the test "levels" and scenarios diverse implicitly prevents these problems. These guidelines need to be followed throughout the development of the movement.
To be realistic, the environment should include doors and corridors, stairs and ladders, narrow ledges and dead ends, floors with gaps and irregular terrain, closed rooms and open spaces, jump pads and teleporters.
Now that the environments have been explained and defined, our focus can be set on the actual movement. As discussed, the world has a very important impact on the movement, but it takes certain capabilities for game characters to use it to their advantage.
Types of Game Worlds
It's surprising how many games rely on movement as their most fundamental concept. They range from chiefly strategic logic games such as Chess or Diplomacy to more entertaining alternatives such as Horses or Downfall. Let's not forget computer games, of course; first-person shooters, real-time strategy, or even role-playing games would be extremely dull without motion!
Despite virtual worlds being widespread, they come in many varieties. Notably, there are worlds of different dimensions (for instance, 2D or 3D), with different sizes and precision. Some worlds are based on grids (discrete), and some are free of restrictions (continuous).
Conceptually, there is a big difference between these variations, which translates into the feel and style of the gameplay. Behind the scenes in the engine, distinct data structures and implementation tricks are used in each case, but this discussion focuses on the consequences for the AI.
On Dimensions
Some games can take place in 2D levels like top-down worlds (Warcraft II), or side views with moving platforms and smooth scrolling (Super Mario Brothers). Alternatively, the world itself can be fully 3D, with different floors in buildings (Half Life), or even tunnels (Descent 3).
It's important to note that the dimensions of the game world are independent from the rendering. For example, the original Doom is in fact a 2D world with the floors at different levels. Even recent 3D first-person shooters have a lot in common with Doom. This leads to the assumption that the underlying properties of the environment are very similar. Indeed, much of the technology is applicable in 2D and 3D.
Most academic research projects deal with top-down 2D AI movement algorithms, for example [Seymour01], which provides a nice amalgamation of past research. The same goes for most of the figures here; they show the problem projected onto the floor. This is primarily because 2D movement is simpler than its 3D counterpart.
As a justification, one often reads that "2D generalizes to 3D," so it's fine to focus on the simpler alternative. With an algorithm defined for two dimensions, the same concept is reapplied to an additional dimension! This is mostly the case, although special care needs to be taken to devise a solution that scales up with respect to the number of dimensions.
For movement in 3D games, this simplification is also acceptable. In realistic environments, gravity plays an important role. All creatures are on the floor most of the time, which almost simplifies the problem to a 2D one. It is not quite 2D because the floor surfaces can be superimposed at different heights (for instance, a spiraling staircase); this type of world is often known as 2.5D, halfway between the complexity of two and three dimensions.
In the process of applying, or generalizing, our 2D solution to 3D, watch out for a couple of pitfalls:

- Complex environments often exhibit tricky contraptions (for example, jump pads or lifts), which are quite rare in flat worlds.
- Instead of naively applying an original algorithm, it can be tailored to make the most of the problem at hand.

These factors influence the design of the solution.
Discrete Versus Continuous
Another property of game worlds is their precision, in both time and space. There are two different approaches, discrete and continuous events (in space or time), as shown in Figure 5.1. A discrete variable can accept only a finite number of values, whereas a continuous variable theoretically takes infinite values.
Figure 5.1. Two types of game environments observed top-down. On the left, space is continuous, whereas the world on the right is based on a grid (discrete).
Conceptually speaking, there is little difference between a continuous domain and a discrete one. Indeed, when the discretization is too fine to notice, the domain appears continuous. Mathematically this doesn't hold, of course, but modern computers have hardware limitations; to simulate continuous media, current software must select an appropriate discrete representation.
Examples of space and time in game worlds show that precision is not only a game design issue, but also a fundamental element for the AI (and an independent AI design decision).
Space
The game environment can be limited to cells of a grid; it's discrete if there are a finite number of grid coordinates. Continuous environments have unlimited locations because they are not restricted to grids.
To store the coordinates in the world, data types must be chosen. These are usually single precision floating-point numbers, which essentially allocate 32 bits to discretize those dimensions. Although there are benefits to using a double representation (64 bits), single representations are enough to make each object seem like it can take every possible position and orientation in space.
Time
Actions can be discrete in time, taking place at regular intervals—as in turn-based strategy games. Alternatively, actions can be continuous, happening smoothly through time as in fast-paced action games.
Similar floating-point data types can be chosen to represent this dimension. However, this choice is slightly more complicated because the programmer can't "manipulate" time as easily. Conceptually, single processors can only execute one logical process at a time—despite being able to perform multiple atomic operations in parallel. Consequently, it's not possible for each object in the world to be simulated continuously. The closest the computer can get is an approximation; a portion of code determines what happened since the last update (for example, every 0.1 seconds). So although a floating-point number is very precise, that precision is not strictly necessary to represent a value in time (see Figure 5.2).
Figure 5.2. On the top, a continuous timeline with events happening at any point in time. On the bottom, the same timeline, but discrete. The time of each event is rounded to the closest mark every two seconds.
Conversions
Fundamentally, because of the limitations of computers, each of these dimensions can be considered as discrete—although it does not appear that way. Regardless, these data types can be converted to one another by simple mapping; this involves scaling, converting to an integer by rounding, and rescaling back. Arbitrary discretizations can thereby be obtained.
In practice, it's possible for the world to have one type of discretization and use a different discretization for the AI. For example, mapping continuous onto discrete domains enables us to exploit similarities in the AI design, and reuse existing AI routines (for instance, standard grid-based A* pathfinding). Of course, we may not necessarily want to perform the conversion for many reasons (for example, compromising behavior quality or sacrificing the complexity of the environment). But at least AI engineers have the luxury to decide!
The Environment and Space
Let's focus on the game world where movement takes place. The environment is planned by the designer, created with modeling tools by the graphics artist, and updated and displayed within the game by the programmer's engine. A wide variety of technologies can be used along this production pipeline, but it is mostly inconsequential from the AI's perspective.
What's important is the information contained implicitly within the world. Most environments are split into two components: structure and detail. After discussing them separately, this section looks at how they combine to define space.
Sum of Its Parts
The information about the environment provided to the players can be conceptually divided into two main components: structure and detail. This is more than just a theory, because most game engines strongly distinguish the two for efficiency reasons. The physics simulation handles the structure, and the detail is for graphics rendering. In the real world, of course, the boundaries between them are much more debatable, so we should consider ourselves lucky to be dealing with computer games!
- The structure is the part of the environment that can physically affect movement. Naturally this includes the floor, walls, doors; chairs, tables, and other furniture; trees, roads, and bridges; and so on. This list is not exhaustive, and should include all the elements in the world that are mostly static—those the players cannot trivially push aside.
- As for the detail, it consists of the "cosmetic" part of the environment: the things game characters cannot collide with (or if so, insignificantly)—books, kitchen utensils, or bits of food; grass, shrubs, and small ledges; among many others.
There is one important omission to notice from the preceding definitions. What happens to living creatures and other mobile entities? Although not necessarily created with the same tools as the rest of the game world, they too are arguably part of the environment. We could argue that game characters have properties of both the structure and the detail. However, some developers (and robotics researchers) believe that they should be treated separately, as an entirely different set. Generally in games, the set of living creatures ends up being forced into one category or the other (for instance, players are detail that's ignored during movement, or players are part of the environment structure that movement takes into account).
Essentially, the problem is about combining these three components of the environment together to create an understanding of space. We want to understand space as well as possible to develop high-quality movement. Considering moving creatures as either detail or structure can have a negative effect on the movement, especially when the problem has not been identified beforehand.
Fortunately, we have the luxury of being able to decide how to handle living creatures as we design the AI. In a deathmatch, for example, it's fine to ignore the other animats for movement; they can be blown up with a rocket if they get in the way! In cooperative mode, however, a separate category is needed for nonplayer characters (NPCs) so that they can ask each other to move. Finally, other players can be considered as obstacles in large crowds.
Each of these three interpretations is a way of understanding the environment, but most importantly space—which relates to the NPC AI.
Defining Space
Fundamentally, the game world describes space. As the shape of the environment, the structure plays an important role; as far as the physics engine is concerned, all the movement is defined by the structure. However, neither human players nor AI characters can always match this physical knowledge of space; it will be extremely difficult, if not impossible, for them to understand the environment perfectly.
In some cases, when a simple world is stored in an explicit fashion (for instance, a 2D grid), understanding it can be a manageable task. As the design becomes more complex, many elements combine to define the environment in an intricate fashion (for example, a realistic 3D world). However, no matter how well the environment is defined or how accurate its physical rules are, it is not necessarily as clear to players and NPCs.
- Human players have to assimilate the environment based on imperfect visual information. The detail of the environment plays an important part visually.
- AI characters usually get a simplified version of this world before the game starts (offline), in both an imprecise and incomplete fashion (for instance, waypoints as guides). Alternatively, the environment can be perceived and interpreted online (as humans would).
For all intents and purposes, space is an abstract concept that cannot be fully understood. Different techniques will have varying degrees of precision, but all will be flawed. Don't see this as a problem, just accept it and embrace it; perfection is overrated! This lesson was learned in robotics thanks to the wave of nouvelle AI robots.
After space has been figured out, it's possible to determine which parts are free space and which can be considered solid. This implicitly defines all movement: what is possible and what isn't. Only then can intelligent movement be considered.
Ref : By Alex J. Champandard
FEAR: A Platform for Experimentation
Game AI generally takes a lot of work to implement—not directly for the AI techniques, but to integrate custom AI, or even middleware, within the game. This is an unavoidable consequence of current engine design, and can be difficult to deal with when starting out in AI development.
This book provides a real game for testing the AI, and not just a toy environment or command prompt. For this, the open source FEAR project is used, integrated with a commercial first-person shooter game engine. Some other game engines use similar frameworks to assist the AI development (but they're not as flexible). FEAR provides some helpful facilities, including the following:
- World interfaces for the AI to interact with the game engine (physics simulation and logic)
- Modules that implement AI functionality on their own or by depending on other modules
- Flexible architectures that enable engineers to assemble arbitrary components together
- Tools to create animats and their source files using minimal programming
As the combination of these elements and more, FEAR provides an ideal foundation for the examples in this book—and many other prototypes, thanks to its flexibility.
Modules
Modules are essentially implementations of AI techniques. When the modules are instantiated within the architecture, they're called components. For example, there is only one rule-based system implementation (module), but different rule-based behaviors can be used for movement and tactical decisions (components).
Interfaces
The module interfaces are purposefully made high level. Only the core functionality is exposed by the interfaces (for instance, movement requests). This approach emphasizes the purpose of the module, which aids both understanding and design.
Such interfaces have many advantages in terms of modularity—just as in software development in general. Essentially, the implementation can be completely abstracted from the other components, which reduces dependencies.
Many AI techniques have similar capabilities, such as pattern recognition, prediction, function approximation, and so on. This functionality can often be expressed as one concise interface, but implemented in different ways.
Data Loading
Special interfaces are used to store data on disk. These interfaces are also specified formally, so the meta-programming tools generate functions that save and load data from disk. The low-level details of the storage are thereby handled automatically. The programmer is only responsible for postprocessing the data as appropriate after it has been loaded.
Dependencies
Each module may import specific interfaces. Conceptually, this means the module depends on other implementations to provide its own functionality.
At runtime, this translates into nested components. The internal components are called on when their output is needed to produce the final result. For example, nested components could be used to implement a subsumption architecture for navigation, or even a voting system handling combat—as explained in the preceding chapter.
Flexible Architecture
FEAR supports entire architectures, defined as hierarchies of components. A hierarchy is the most generic type of architecture, because both monolithic and flat architectures can be considered specific subtypes.
Each component has access to its children. (This access is called the scope of the component.) FEAR's dynamic C++ framework sets up the components automatically during initialization, so none of the standard code to import interfaces needs to be written manually (as with DirectX).
At the time of writing, all forms of arbitration (independent, suppression, combination, sequence) are supported indirectly; the programmer can implement each as modules in a customized fashion.
Creating an Animat
The process of creating an animat in FEAR is threefold, as described in this section.
Description of the Architecture
Create a file with the description of the architecture. This specifies which components the brain of the animat needs and other recursive dependencies for those components.
Generating the Source Code
To create all the project files, we need to process this architecture definition with the toolset. The resulting animat should compile by default, effectively creating a brain-dead animat!
Adding behaviors to the animat involves adding code to a few files—usually those with "to-do" comments inside them. Other generated files are managed by the toolset (stored in the Generated folder of the project) and should not be edited manually; their content may disappear when the tools are used again.
Preparations for the Game
The compiled files need to be accessible by the game engine. This involves copying the brain dynamic link library (DLL) into the correct directory, along with a descriptive file to tell the framework about the architecture required. All the data files for the animats are stored in their own directory, too.
Installation Note
Each of the animats can be downloaded separately, or in a complete package. The full source is available as well as the compiled demos. It's best to start by running the demos to get a feel for the platform. The instructions for installing the tools and building each animat are detailed on the web site at http://AiGameDev.com/.


Architectures
Designing (game) AI is about assembling many weak components together. The components provide functionality for each other and communicate together to collectively solve the problem. This set of nested components is known as an architecture.
These architectures are reactive when they are driven by sensory input, and decide on output actions deterministically. Such architectures are very common for systems that need to react in a timely fashion, whether to external requests or stimuli from the environment (for instance, robots and washing machines).
Components
A component can be understood as a black box with a mystery AI technique inside. Alternatively, the component may be built as a subsystem of smaller components. Components interact with others via their interfaces, using inputs and outputs.
Internally, as mentioned previously, there is no theoretical difference between a deliberative component and a reactive component. This correspondence applies to the interfaces, too! Any deliberative planning component can be considered as reactive. All the necessary information is provided via the interfaces in the same fashion; only the underlying technology changes.
This is a tremendous advantage during the design phase, because the underlying implementation can be disregarded. Then, if a reactive technique is not suitable, a deliberative component can be inserted transparently.
Another consequence of the black box paradigm is that you can use other components during implementation. These components are nested inside others. This is the epitome of modularity in AI.
Organization
Naturally, there are many ways to combine components together. This is the purpose of architectures, because they define the relationship between components. Figure 3.4 shows three example architectures with different internal organizations:
Figure 3.4. Three example architectures with different internal organizations. From left to right, monolithic, flat, and hierarchical architectures.

- Monolithic architectures include only one component.
- Flat architectures have many components in parallel.
- Hierarchical models have components nested within others.
As an example of a hierarchical architecture for games, the brain may be built as a collection of behaviors (for instance, hunt, evade, patrol), which are components within the brain. Each behavior in turn may depend on components for moving and shooting. This is a three-level hierarchy.
In general, selecting the organization of an architecture is about problem complexity. Simple problems will manage with monolithic architecture, more sophisticated problems may need flat architectures, whereas hierarchical architectures can handle almost any problem by breaking it down. (Chapter 21, "Knowledge of the Problem," and Chapter 28, "Understanding the Solution," discuss these points further.)
Decomposition
Instead of thinking of an architecture as a combination of components (bottom-up), it can be understood in a top-down fashion: "How can this problem be split up?" This concept of decomposition is central to AI development in general.
There are many types of decompositions. The idea is to split the problem according to certain criteria—whichever proves the most appropriate for solving it! This applies equally well to software design as to the creation of game AI:
- Structural decomposition splits the solution according to the function of each component. For example, there may be a component responsible for movement, and another for handling the weapon. These are different functions.
- Behavioral decomposition is based on the distinctive activities of the system. These can be understood as different modes such as hunting, fleeing, collecting ammo, or celebrating. Different components would handle these behaviors.
- Goal decomposition uses the overall purpose of the system to determine how to split it into components. In game AI, goals depend on the behavior (for instance, finding a weapon). Although goals do not provide clear policies for decomposing the problem, they are always a criterion in the decisions [Koopman95].
Naturally, the decomposition can happen on multiple levels using many different criteria. For example, the initial problem may be decomposed as behaviors, and then each behavior may be expanded as different functionality. Using different criteria at different levels is known as a hybrid decomposition, as opposed to using one criterion throughout the architecture (a pure decomposition).
Behavioral decompositions are very appropriate for computer games. The functional decomposition is also used in this book, as we develop common abilities that can be reused by the nonplayer character (NPC) AI.
Arbitration
Given a set of subcomponents, how are they connected together? Specifically, how are all the outputs interpreted to form the output of the component itself? There are four different ways of doing this:
- Independent sum essentially connects each component to different outputs so no clashes can occur.
- Combination allows the outputs of different components to be blended together to obtain the final result.
- Suppression means that certain components get priority over others, so weaker ones are ignored.
- Sequential arbitration sees the output of different components alternating over time.
There really is no right or wrong method of arbitration. Each of these methods is equally applicable to computer games.
Examples
There are many different combinations of architectures. Figure 3.5 shows two common examples to illustrate the issue.
Figure 3.5. Two popular reactive architectures. On the left, the subsumption architecture with its horizontal layers. On the right, a voting system that combines the votes of four nested components.

Subsumption
The subsumption architecture is a behavior-based decomposition, using a flat organization with suppression on the outputs [Brooks86].
The system can be seen as a set of horizontal layers. The higher the layer, the greater the priority. Layers can thereby subsume the ones beneath by overriding their output. This architecture is explained further, and applied to deathmatch behaviors in Chapter 45, "Implementing Tactical Intelligence."
Voting System
Voting systems generally use a functional decomposition, with a flat organization using combination to merge the outputs together. Chapter 25 uses a voting system for weapon selection.
The system can be interpreted as a set of distributed components that are all connected to a smaller component responsible for counting votes. The option with the most votes becomes the global output.
Reactive Techniques in Game Development
Just like the behaviors, reactive—or reflexive—techniques have many advantages. In fact, reactive AI techniques have been at the core of most games since the start of game development. As explained, these techniques are often enhanced to provide non-determinism, but this can often be simplified into a deterministic mapping.
Advantages in Standard Game AI
The major advantage of reactive techniques is that they are fully deterministic. Because the exact output is known given any input pattern, the underlying code and data structures can be optimized to shreds. The debugging process is also trivial. If something goes wrong, the exact reason can be pinpointed.
The time complexity for determining the output is generally constant. There is no thinking or deliberation; the answer is a reflex, available almost immediately. This makes reactive techniques ideally suited to games.
Success stories of such approaches are very common, and not only in computer games. Historically, these are the most widely used techniques since the dawn of game AI:
- Scripts are small programs (mostly reactive) that compute a result given some parameters. Generally, only part of the programming language's flexibility is used to simplify the task.
- Rule-based systems are a collection of "if...then" statements that are used to manipulate variables. These are more restrictive than scripts, but have other advantages.
- Finite-state machines can be understood as rules defined for a limited number of situations, describing what to do next in each case.
These standard techniques have proven extremely successful. Scripts essentially involve using plain programming to solve a problem (see Chapter 25, "Scripting Tactical Decisions"), so they are often a good choice. Rule-based systems (covered in Part II) and finite-state machines (discussed in Part VI) can be achieved with scripting, but there are many advantages in handling them differently.
Advantages for Animats
The reactive approach also has benefits for animats, improving learning and dealing with embodiment extremely well.
Embodiment
With embodiment, most of the information perceived by the animat is from the surroundings, which needs to be interpreted to produce intelligence. Reactive behaviors are particularly well-suited to interpreting this local information about the world (as animals have evolved to do).
Also, it's possible to make the reactive behaviors more competent by providing the animat with more information about the environment. Thanks to their well-developed senses, humans perform very well with no prior knowledge of their environment. Instead of using better AI, the environment can provide higher-level information—matching human levels of perception. Essentially, we make the environment smarter, not the animats.
Learning
Most learning techniques are based on learning reactive mappings. So if we actually want to harness the power of learning, problems need to be expressed as reactive mappings.
Additionally, it's often very convenient to teach the AI using the supervised approach: "In this situation, execute this action." Reactive behaviors are the best suited to modeling this.
AI Development Process
Developing AI is often an informal process. Even when starting out with the best of intentions and aiming for the perfect system, the methodology will often be left to improvisation, especially in the experimental stages. In fact, most developers will have their own favorite approach. Also keep in mind that different methodologies will be suited to different problems.
In a somewhat brittle attempt to formalize what is essentially a dark art, I developed the flow chart shown in Figure 2.1. This flow chart describes the creation of one unique behavior. Jumping back and forth between different stages is unavoidable, so only the most important feedback connections are drawn.
Figure 2.1. An interpretation of the dark art that is the AI development process.

Hopefully, Figure 2.1 makes the process seem less arbitrary! At the least, this method is the foundation for the rest of this book. We'll have ample opportunities to explore particular stages in later chapters, but a quick overview is necessary now.
Outline
The process begins with two informal stages that get the development started:
- The analysis phase describes how the existing design and software (platform) affect a general task, notably looking into possible restrictions and assumptions.
- The understanding phase provides a precise definition of the problem and high-level criteria used for testing.
Then, there are two more formal phases:
- The specification phase defines the interfaces between the AI and the engine. This is a general "scaffolding" that is used to implement the solution.
- The research phase investigates existing AI techniques and expresses the theory in a way that's ready to be implemented.
All this leads into a couple of programming stages:
- The development phase actually implements the theory as a convenient AI module.
- The application phase takes the definition of the problem and the scaffolding, using the module to solve the problem.
This is where the main testing loop begins:
- The experimentation phase informally assesses a working prototype by putting it through arbitrary tests that proved problematic in previous prototypes.
- The testing phase is a thorough series of evaluations used only on those prototype candidates that have a chance to become a valid solution.
Finally, there is a final postproduction phase:
- The optimization phase attempts to make the actual implementation lean and mean.
These phases should not be considered as fixed because developing NPC AI is a complex and unpredictable process. Think of this methodology as agile—it should be adapted as necessary.
Iterations
These stages share many interdependencies, which is unavoidable because of the complexity of the task itself (just as with software development). It is possible to take precautions to minimize the size of the iterations by designing flexible interface specifications that don't need changing during the application phase.
The final product of this process is a single behavior. In most cases, however, multiple behaviors are required! So you need to repeat this process for each behavior. There is an iteration at the outer level, too, aiming to combine these behaviors. This methodology is in the spirit of nouvelle AI, which is directly applicable to game development.
Luckily, it's possible to reduce the number of outer iterations by using a clever AI architecture design, which is discussed in the next chapter.
Ref : By Alex J. Champandard
Required Background
Before discussing the process of creating such animats in games, it seems appropriate to list what skills are required to develop AI. This book assumes the reader has a few years of programming experience, but creating AI is an interdisciplinary process. The AI part of the software sits at the crossroads between the game engine and the AI data; the AI engineer also mediates with the other programmers and designers.
Programming
The most obvious skill an AI developer needs is programming. AI code can get relatively complex in places, but solid programming knowledge helps significantly; in fact, most programmers could produce rudimentary NPCs without much AI knowledge. That said, programming is rarely the bottleneck for an AI developer.
Note
In this book, the theory behind the algorithms is described in pseudo-code for the sake of simplicity. As such, it's possible to implement them in almost any language. Because the code available on the web site is C++, most of the programming idioms focus on that language.
Computer Science
Most programming skills are accompanied by elementary knowledge of computer science. Being comfortable with data structures (for example, lists, trees, and graphs) and basic algorithms is a tremendous help for creating AI, too. Don't worry, however; the book covers the necessary principles.
Mathematics
Math is essential to improving as an AI programmer. Just like 3D programmers need knowledge of geometry to push the application programming interface (API) to its limits, mathematical understanding enables AI programmers to integrate cutting-edge theory from academia and to optimize the theoretical aspects of each algorithm. It is possible to avoid the math by relying on pseudo-code and example implementations, but a more permanent solution requires that you not shy away from theory. This book gives dedicated readers ample opportunities to understand the theory and take a step toward academic papers.
Software Engineering
Designing an intelligent system that can control a creature in a complex 3D world is no easy task. Applying common design patterns to the problem certainly helps simplify the system. This book explains the design patterns commonly used in AI and how they can be adapted to different problems.
Game Engine Architecture
A few preliminary steps must be taken before the actual AI development can start. This generally involves preparing the engine architecture so that it can support AI. In most cases—especially when human players are already supported—this job is straightforward.
Once a good framework is in place, it's actually possible to code AI without knowing too much about the underlying game. Chapter 4, "FEAR: A Platform for Experimentation," presents the framework used as the basis of this book, which is extremely useful from an educational or experimental point of view.
In a professional environment, groups of developers work together to build the game architecture. Experienced developers (those who know best?) can thereby assist the integration of the AI in design and implementation. AI developers can rely on other programmers to assist them when necessary.
A Modern Approach
Traditionally, AI is viewed as a code fragment that manipulates data. These small programs are generally known as agents. These agents are like software systems; they have layers. One central processor acquires information, processes it, deliberates a bit more, and executes some actions. Acting on behalf of the user, agents solve narrow problems with a human quality.
This view is problematic for building large and intelligent systems; the theory scales up poorly, and does not transfer from lab examples to other domains. Nouvelle AI rejects such focused AI, instead believing that true intelligence is about performance in the real world.
The 1980s witnessed a revolution based in robotics that eventually shook most of AI. The ideas, initially from Rodney Brooks, proposed using a different model for intelligence, allowing working systems to be built with more suitable methodologies [Brooks86, Brooks91].
This leads to studying embodied systems situated in realistic environments (such as robots or game characters). To solve the problems that occur in practice, new approaches to AI are needed (such as the behavior-based approach).
Brooks advocates that no central processor has to deliberate every move; instead, the system is distributed into behaviors that react instantly to their environment. Using this reactive approach, full systems are built up incrementally, by testing each set of components.
This revolution has continued since, notably influencing a group of researchers to focus on the simulation of adaptive behavior (SAB). The first conference was organized back in 1990 by the International Society for Adaptive Behavior [ISAB02].
"Every two years, the Animals to Animats Conference brings together researchers from ethology, psychology, ecology, artificial intelligence, artificial life, robotics, engineering, and related fields to further understanding of the behaviors and underlying mechanisms that allow natural and synthetic agents (animats) to adapt and survive in uncertain environments."
Animats are essentially synthetic creatures that live within a virtual environment. Because they are embodied, they interact with the world using only their body—making them fully autonomous. Animats can also adapt to their environment by using a variety of learning algorithms. But are these approaches suitable to games?
Animats in Games
Many game players, and even developers, would consider animats the "proper" way of dealing with AI NPCs. Wouldn't it be impressive to have each bot in the game as an accurately simulated creature? As far as game AI techniques are concerned, this nouvelle game AI approach is the opposite of standard techniques.
Are Animats Applicable to Games?
The major goal of game developers is believability; the accuracy of the simulation itself is not a concern. Still, animats have much to offer to computer games. By simulating the creatures accurately, fewer aspects of the behaviors need to be "faked." Because the AI is genuine, it can handle situations unforeseen by designers.
Already, similar (diluted) ideas are starting to leave their mark on the industry. Recent trends in game AI lean toward embodiment, notably in the simulation of sensory systems (Thief), the addition of noise to some actions (Quake 3), and even perceptual honesty (Black & White).
By extrapolating this progression, the result is fully embodied animats. This will certainly happen within a few years, but whether it is three or ten years away is anyone's guess. In the meantime, preliminary research in synthetic creatures shows that properties of animats, such as embodiment, actually lead to more genuine behaviors, which in turn improves believability [Isla02, Blumberg01].
As far as software engineering is concerned, the animat approach has much to offer from a design point of view. Embodiment is an elegant way of modeling the role of the AI in the game engine. The formal definition of interfaces between the body and the brain is good practice (notably separating the AI from the logic and simulation). As for developing AI behaviors, animat and behavior-based research has revealed many ways of dealing with experimentation, such as incrementally building the AI system.
How Do We Create Animats Effectively?
How can such radical ideas be applied within game engines? Is it even feasible given time and computational constraints? As a matter of fact, it's more than feasible; different aspects of animats have already been demonstrated in popular games. This is the crucial observation; it's possible to integrate properties of animats into the standard AI design, which enables us to compromise between typical game AI approaches and the animat approach.
To date, no genuine animats have shipped in commercial titles, but that day isn't too far in the future. Some animat prototypes have closely matched the skill level of standard game bots; in some cases, they prove more reliable and realistic than the bots.
Compared to standard agents, animats can in fact be more efficient in many respects. The interaction of an animat with its environment is formalized, so it can be optimized using the most appropriate mechanism (for example, message passing, function calls, or shared variables). Learning techniques can minimize the processing power used to perform a particular behavior.
A Healthy Compromise
The animat approach has many benefits, regardless of policies on learning or embodiment. These advantages include improvements in the design and in the development pipeline. Naturally, genuine undiluted animats have the potential to be extremely successful within games, and the rest of this book investigates this noble goal. However, far from being on an idealistic crusade, this discussion attempts to identify places where the animat approach isn't appropriate in games, while trying to extract its advantages.
The remainder of this chapter investigates these issues further by tackling the two major characteristics of animats separately (embodiment and learning), looking into their potential benefits and pitfalls.
Embodiment
Embodiment is a different way of dealing with in-game creatures. Typically, NPCs are just agents: "smart" programs that manipulate data, like chatbots or web spiders. Such entities are purely virtual, whereas embodied agents live in a simulated world and have a synthetic body. Regardless of whether they are 2D sprites or complex 3D models, these bodies cannot do everything; they are subject to the physical rules of the world.
Definition
An embodied agent is a living creature subject to the constraints of its environment.
Because the bodies of animats are physically constrained, the actions of their brains are limited. In general, the possible actions that can be executed by the body—and hence the AI—are restricted to the subset of actions consistent with the laws of the simulation. These actions often turn out to be physically plausible. However, embodiment generally does not limit what the AI can achieve; it just restricts how it is done.
Some characters in games represent human players who get to control the bodies. Many other characters are synthetic, similarly controlled by the computer. The AI itself can be understood as the brain, and the body offers the means for interaction with the game's physics and logic.
Consider a classical example: a standard agent can change its position directly to reach any point in space. An animat—with embodiment—needs to move relative to its current position and actually avoid obstacles; it does not even have the capability to update its position directly. Nowadays, many games do this, effectively enforcing the simplest form of embodiment.
Actually simulating the body enables developers to add biologically plausible errors to the interaction with the environment. Errors might be present when information is perceived from the environment and in the actions. For example, animats could have difficulty perceiving the type of characters in the distance. There could even be parametric noise in the turning action, so aiming is not perfect (as with humans). Including such biologically plausible details allows the NPC to behave more realistically.
Motivation
Increasingly, agents with full access to the game data are becoming inconvenient. Having no restrictions on the reading and writing of data often results in internal chaos within the design of the engine. Because there is no formalized interface, the queries for information are left to the client (AI). Developers are actually starting to impose restrictions on these queries, notably limiting the subset of information available to the AI, such as preventing bots from seeing through walls.
For large games (such as massively multiplayer online games), it's essential to develop such hooks for the AI in the game engine. Formal interfaces also allow the server to be distributed so that agents can reside on different machines if necessary. The AI can thereby be fully separated from the game logic and from the simulation of the world (physics).
So it seems formal interfaces, such as those the AI Interface Standards Committee is attempting to define [AIISC03], will become increasingly important. Whether these can be standardized is another issue, but embodiment provides useful guidelines for drafting custom interfaces as the exchange of information between the body and the brain: sensory data flows from the body to the brain, and actions are passed from the brain to the body.
This book anticipates the trend and uses such formal interfaces. In terms of code complexity, major improvements result from separating the acquisition of the data from its interpretation. As for efficiency, using embodiment often allows better optimizations.
Technology
With a formalized interface, the engineer can easily decide on the most appropriate format to communicate data to the AI—and do so mostly transparently using mechanisms such as messages, callbacks, abstract function calls, shared variables, and so on. Because a standard interface exists, its implementation can be particularly optimized for speed using the most appropriate mechanism.
Implementing embodiment efficiently requires a few common techniques to be used. These tricks are the major reasons why formal interfaces can actually outperform an AI implementation with direct access to the data:
Lazy evaluation means that no information is gathered from the world until it is actually requested by the AI. This prevents redundant computation.
Event-driven mechanisms mean that the AI does not need to check regularly for data. When relevant information is available, the AI is notified in an appropriate fashion.
Function inlining still allows the interfaces to be separated, but also optimized out by the compiler (if necessary). This is suitable for small functions, but larger ones benefit from being separate.
Custom optimizations can often be used to speed up the queries. For example, with a spatial partition of the world, a visibility query needs to check only the relevant subset of entities.
Batching refers to collecting many queries or actions so that they can be processed later. Within the engine, the implementation can then decide the best way to deal with them to maintain memory coherence.
Used appropriately, these techniques can significantly reduce the cost of exchanging information between the AI and the engine, and make formal interfaces and embodiment a desirable property.
Learning
Learning is the second property of animats and characteristic of nouvelle game AI. Instead of the designer crafting fixed behaviors, the process is automated by adaptation and optimization techniques.
Definition
Regardless of their actions in the world, living creatures are constantly presented with a flow of sensory data. Biological animals are capable of assimilating this information and using it to adapt their behavior. There is no reason why animats cannot learn; they too are presented with a stream of information from the environment, which they can interpret.
"Learning is the acquisition of new knowledge and abilities."
This definition identifies two kinds of learning: information and behavior. As far as the result is concerned, there is little difference between the two. Indeed, it's often possible to learn knowledge as a behavior; conversely, behaviors can be expressed as knowledge. So intrinsically, both these subtypes of learning can be considered identical in outcome.
In practice, a distinction exists between the two. One part of the animat does not change (phylogenetic), and another part can be adapted (ontogenetic). If the AI system itself is changed at runtime, the adaptation is called direct, and indirect otherwise [Manslow02]. (Again, there's a fine line between the two.)
Motivation
Two main scenarios encourage the use of learning in computer games. Different terms are used for each of these cases—optimization and adaptation, respectively—during the development and within the game:
Optimization is about learning a solution to a known puzzle. This is essentially used to simplify the development process (offline) because learning might produce a better answer to the problem in less time than the manual approach.
Adaptation is about learning in unknown situations, and how best to deal with them. This scheme requires the AI to continuously update itself—to deal with different player styles during the game, for example (online).
Fundamentally, these scenarios may be considered the same problem, too! Indeed, the exact same techniques can be used to perform either. However, the two schemes suit different domains, so different AI techniques are more appropriate for each.
The design of the AI can exploit these different types of learning, too. Optimization is often much easier to integrate into the development pipeline as a useful tool for creating believable characters. Adaptation, on the other hand, has repercussions within the game, so it requires a few more precautions in the design.
Technology
Many AI techniques can be used to perform both varieties of learning: neural networks, decision trees, genetic algorithms, reinforcement learning, classifier systems, and so forth. These different solutions are discussed throughout this book. From a conceptual point of view, there are the following four categories of algorithms:
Supervised learning algorithms need to be presented with examples. Apart from assimilating facts or behaviors, they can recognize patterns in the training samples. This allows the learning to generalize, and perform well on unseen examples.
Reinforcement learning evaluates the benefit of each action using a scalar number, instead of providing specific examples. This reward feedback is used to adapt the policy over time.
Evolutionary approaches provide scalar feedback for a whole sequence of actions, evaluating the fitness of episodes instead of giving a continuous reward.
Unsupervised learning does not rely on direct training. Instead, the designer provides high-level guidance, such as a performance metric.
Naturally, there are often ways to integrate these approaches—or even use one approach to solve the other (for example, self-supervision). These design issues come into consideration after the problem is identified.
Given techniques that learn (either supervised or not), the animats can be taught in different ways:
Teaching involves a human providing a set of examples of the desired behavior until the animat has understood what to do.
Imitation allows the animat to copy another player, who is usually human. It can thereby learn behaviors from another player's experience.
Shaping sets up successive trials from which the animat can learn. After the animat learns to accomplish simple tasks, more complex ones are presented.
Trial and error places the animat in its environment and expects it to learn by trying out all the different approaches on its own.
Each of these methodologies can be followed during the development stage or during the actual game. Although these different approaches are presented in a practical fashion throughout this book, Chapter 35, "Designing Learning AI," specifically covers general technical and design issues.
For Skeptics
The key to successfully integrating learning within games is to use it with consideration. Some things are just not suited to learning. There will always be a need for "static" AI, even if it just acts as the glue between adaptive components.
The benefits of learning are undeniable! Learning enables the developer to save time whenever possible, and to add to the game's appeal by bringing ambitious designs to life.
However, it's debatable whether learning is capable of performing reliably within games. One of the major advantages of learning techniques is that they can be combined with other solutions. This enables the designer to modify or override the results of the learning. This book covers ways to control the learning indirectly while supervising the outcome directly.
Finally, the fact that techniques can be applied to learning facts or behaviors, online or offline, and with so many different methodologies undoubtedly means that one flavor is suitable for every purpose.
Traditional Approach
How do developers tackle this problem generally? Creating game AI is certainly not the most formal of processes in software development; it requires many ad-hoc modifications and hours of empirical evaluation. The games industry is somewhat immature as a whole, but game AI is probably the furthest behind.
Until recently, AI took barely a few hundred lines of code, hacked together with a couple of months to spare before the deadline. Since the turn of the millennium, AI has suddenly been propelled into the limelight and is expected to scale up—often becoming a centerpiece of the design.
A few different aspects about the typical approach to creating game AI merit analysis, notably the integration with the engine, the design of the system, and the guidelines used throughout the development.
Integration and Design
Historically, in-game agents were simulated as part of the game logic. The 2D positions of the characters were updated at each iteration, like in Pac-Man. Since then, things have moved on slowly. Thankfully, the AI code now is generally separated from the logic. However, the agent is generally given direct access to the game data, free to extract whatever it needs. The separation is only a programming trick to simplify the codebase.
As for software design, AI subsystems are often created as separate libraries (to handle movement, for example). Naturally, these are created in different ways, depending on the development style. The most common approach in game development is the hands-on approach, where you incrementally build up the interface and required functionality from scratch. This might not sound formal—at least in terms of classical software design—but modern agile approaches acknowledge the benefits of such rapid iterations. This is certainly well suited to AI because a large amount of experimentation is required.
Beyond these modular components, the code can become somewhat unmanageable because the unexpected complexity of the task or deadline pressures sometimes catch programmers off guard. For example, the AI for Return to Castle Wolfenstein uses C function pointers to simulate a finite-state machine, an approach that quickly becomes almost impossible for anyone but the original developer to understand.
Guidelines
Because of the limited resources available, the only important guideline for AI developers is to cut corners wherever possible yet still achieve efficiency. The responsibility of the AI itself is to control characters in a realistic fashion. How this is achieved under the hood is irrelevant. Many applicable techniques could be borrowed from AI research, but few are used in practice. All the common techniques used in classical game AI (for instance, search and scripting) arguably have their roots in computer science instead.
Often, the combination of these two requirements (efficiency and realism) leads to simple AI solutions, such as scripts. This approach is entirely justifiable because it solves the problem in a way that designers can easily control.
Discussion
This typical approach to game AI has been finely tuned over the years and seems to have reached satisfactory levels. AI in games is competent enough to not stand out. The standard design and integration has immediate benefits (such as simplicity), but can prove inconvenient in many ways.
First, letting the AI access information in the game directly is both dangerous and unnecessary. Second, manually developing all the behaviors with plain programming is tedious, and the time required grows exponentially with their number.
There is undoubtedly room for improvement over these standard approaches, as shown by recent progress in game AI. There is an underlying trend in these innovations.
An Engineer's Perspective
AI in Game Programming
In practice, the purpose of the AI system in the game engine is to control every aspect of the NPC. For example, within computer games—and not only first-person shooters—the AI must provide the following:
- Primitive behaviors such as picking up items, pressing switches, using objects, performing purposeful gestures, and so on
- Movement between areas of the game environment, dealing with obstacles, doors, or platforms
- Higher-level decision making to determine which actions are necessary for the NPC to accomplish its tasks, and in what order
To develop systems capable of providing such control, a minimal amount of technology is necessary. Although standard programming techniques allow the implementation of intelligent NPCs, techniques from the field of AI can provide the following:
- Elegant solutions for explicit control
- Technology to support implicit control efficiently
Compared to standard scripting techniques, AI technology can be computationally more efficient and can generate better quality behaviors. Such improvements are possible thanks to various aspects of AI, discussed throughout this book:
- AI techniques providing functionality such as motor control, pattern recognition, prediction, or approximation
- Design patterns inserted at a system level to assemble these components together
- Methodologies used to design and test behaviors within realistic environments
These AI methodologies allow the synthetic characters to come to life, but they also serve a particular purpose in games. Game development has common requirements that need to be taken into account when engineering the AI system. Two properties are important in games:
- Entertainment— The AI can call upon the skills of human players. This involves providing them with various challenges and testing different abilities with increasing levels of difficulty. The AI also can play on emotions to increase entertainment value. The AI can trigger amazement by arranging cool events, or fright by building scary atmospheres.
- Believability— The AI characters can improve immersiveness by doing their job in a way that does not distract attention from the mission of the game. As for realism, AI allows each individual NPC to behave in a plausible way that seems logically feasible to human players.
In summary, AI game development is about providing control to NPCs to produce entertainment (assisted by believability or realism). The next chapter takes the engineer's approach, using AI technology to assist in the creation of systems capable of such tasks.
Designers Versus AI
How do stronger AI technology and the potential for intelligent NPCs affect game design? It's clear that improvements in the field of AI have opened up possibilities for the design. In the near future, advancements will undoubtedly lead to other expansions in "design space."
However, intelligent NPCs are changing the methods of game design, too. The issues discussed in this section are causing tension within development teams, and the subject is very topical. Clashes are a recurring theme—even among the very best developers.
Obstacles
Beyond any technical details, a major problem is making room for intelligent behavior in the design, and dealing with those behaviors:
- Intelligent game characters can behave autonomously.
- Designers need to control the behavior of NPCs.
An undeniable clash exists in these roles; do we actually need AI? If AI is present, do we need designers? The overlap lies between the designers' wishes and what the AI can provide. A reasonably intelligent NPC can challenge the authority of the designer!
Two Types of Games
Different attitudes toward control have led to two distinct varieties of video games. In the first variety, designers implement their vision in a top-down fashion, controlling every detail of the game. This is the explicit design approach. It's particularly common in games based on a single story line (Doom 3, Unreal 2).
When the design script is extremely detailed, such as in single-player scenarios, there is little need for AI techniques. It's even arguable whether NPCs are "intelligent" characters at all. (They do the same thing every time regardless of changes in the situation.) Standard programming or scripting can bring the design to life. The less detail there is in the story, the more AI technology is necessary.
The second type of game results from bottom-up design, whereby the AI and the environment combine so that an interesting game emerges. The key observation is that there is no master script. Working together, all the characters in the game create an interesting world that makes a game (that is, an emergent story line, as in Pizza Tycoon).
This is implicit design, because the story is not controlled directly by the designer, although each individual NPC behaves as told (low-level explicit control). If the NPCs are intelligent rather than scripted, the designer's control lies in the environment only!
Many games are built in the implicit fashion, but very few of these actually have a story line (for instance, SimCity, Transport Tycoon). The story line is whatever path the player decides to take. In these games, designers can't easily control how the AI comes together to affect the gameplay.
The combination of top-down and bottom-up approaches to game design is a topical problem in AI research. One practical solution that's already been used (in Grand Theft Auto III, for instance) is to alternate sequences of explicit control (to set up the story line), and implicit control (to allow the player to play it). In this case, the behavior of the NPC is overridden during the "cut scenes" and left to autonomous AI control otherwise.
What Conflict?
These distinct types of games reduce the overlap between the AI and the design. In general, the designer's job is to craft the game, from the low-level behaviors to the overall story line. Animating characters is a crucial part of this. It's the AI's responsibility to control the behavior of in-game characters, bringing the vision to life.
Conflict can arise when the designer wants explicit control of an intelligent NPC. In this case, the AI has to perform a particular action in a specific situation as told. This is the easiest problem to resolve—even with standard AI techniques—because the AI can be directly overridden (as in cut scenes).
Further conflict can arise when designers want the story line to emerge in a different fashion (for instance, get the characters to force the player into this situation instead). This cannot be resolved as simply because the control is implicit; there's often no easy way to tell the NPC directly what to do for a particular situation to emerge.
Designers of games with story lines seem to have particular issues with implicit control, not realizing the challenges presented to the AI programmer. Identifying the situation can help. As a rule of thumb, control is implicit when it requires more than one operation to achieve.
For example, making sure that there is another player—any player—behind the door when it opens is explicit control; it's just a matter of spawning a new entity into the game world. However, guaranteeing that teammate Marvin is behind the door when it opens is implicit control. Just moving Marvin would make the game world inconsistent. (He might have been visible at the time.) So he actually needs to go through the door, which takes more than one operation; it's implicit control.
The solution is for designers to agree with the AI programmer upfront about all types of implicit control required, and then to specify explicit control as necessary. The idea is that the AI system needs to be engineered specially to provide "handles" for implicit control, whereas explicit control is just a matter of overriding the system.
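One way such a "handle" for implicit control might look: instead of teleporting Marvin behind the door (which would make the world inconsistent), the designer injects a goal and lets the AI achieve it through its normal sequence of operations. Everything here—`GoalDrivenAI`, the goal queue, the step names—is an illustrative assumption, not the book's design:

```python
from collections import deque

class GoalDrivenAI:
    """AI that pursues queued goals, keeping the game world consistent."""

    def __init__(self):
        self.goals = deque()

    def request(self, goal):
        """Handle for implicit control: designers inject a goal,
        but the AI decides how to carry it out, step by step."""
        self.goals.append(goal)

    def update(self):
        """Perform one operation toward the current goal."""
        if not self.goals:
            return "idle"
        goal = self.goals[0]
        step = goal.pop(0)        # next operation (e.g., a waypoint)
        if not goal:
            self.goals.popleft()  # goal achieved
        return step

# Usage: "Marvin ends up behind the door" takes several operations.
marvin = GoalDrivenAI()
marvin.request(["walk_to_corridor", "open_door", "enter_room"])
steps = [marvin.update() for _ in range(4)]
# → ['walk_to_corridor', 'open_door', 'enter_room', 'idle']
```

Because every intermediate step actually happens in the world, the player never sees an inconsistency, which is exactly what distinguishes this from the explicit override.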
Setting up guidelines allows the AI developers to exploit the freedom they are given to build the AI system. Designers will make the most of the AI system with the agreed forms of control—harnessing their power to improve the gameplay.
Ref : By Alex J. Champandard
The State of Game AI
Remember the dawn of 3D graphics in games? Only a few companies were willing to take the plunge: the proverbial penguins. Some mastered the subject, but the games still had many bugs. Visual artifacts slowly disappeared as they came to be considered unacceptable. Nowadays, even when new technologies are introduced, they are integrated seamlessly; the development pipeline is well established. There is much experience in the field, assisted by a strong hardware industry.
In AI, we're still in the first stage. Common techniques that have been used over the years—such as scripted behaviors and A* pathfinding—could be compared to 2D graphics. Although improvements still need to be made, these two techniques have matured tremendously; scripted behaviors and A* pathfinding in most AI systems nowadays don't make obvious mistakes. This is an amazing feat. We've even reached a stage where such AI techniques shine in some games (for instance, Unreal Tournament 2003 and Return to Castle Wolfenstein).
However, AI (both kinds) is set to revolutionize the way we make games. AI technology is modernizing the development, and intelligent creatures are transforming game design.
Technological Revolution
Some companies have ventured into more advanced technology borrowed from the field of AI (for example, decision trees or reinforcement learning). In these applications, the superior AI generally achieves similar design goals, only the means of production change. (For instance, Colin McRae Rally 2 uses learning and neural networks, which means the AI doesn't have to be programmed manually [Hannan01].) Despite the standard gameplay design, the game engine needed to be adapted to accommodate the AI techniques and the development process (usually involving less scripting). This is the technological AI revolution in the trenches.
Nevertheless, a certain amount of skepticism still exists within the game-development community—and justifiably so. Why is better AI technology actually needed? Given a standard design, does it help the development in any way? The answer is that AI techniques are not a requirement per se. There's no need for awesome technology if it produces the same gameplay! In development, we could do fine without AI; in fact, most professional game programmers have the skill to do without any form of AI if they so desire.
However, AI technology has the potential to improve the development process by boosting efficiency, speeding up design and experimentation, and generally improving the quality of the final product—when chosen in the right context. We spend the rest of this book on this subject, learning from successful prototypes and from past mistakes.
Design Revolution
A few other games use modern AI to push levels of NPC intelligence beyond the designs possible with "standard" AI (that is, scripts and pathfinding). Together, the stronger AI techniques and adventurous designs have led to obvious improvements in gameplay. For example, Black & White's gameplay revolves around the interaction with an intelligent creature with learning abilities.
On the whole, these more intelligent creatures have had a great reception from the press and the public (including AI enthusiasts). Capturing this amount of attention has only been possible with ambitious designs, which the game community now seems to crave. This is the AI design revolution, and it's just starting.
There's a calm certainty that a golden age of NPC AI is looming, despite some hesitation about how to make it happen. There is little doubt that the AI revolution will be mainly design driven. The savings generated by a better AI production pipeline pale in comparison with the lucrative market for AI-driven games. This market in itself is enough to drive any progress.
The technology to bring these AI designs to life has been available for years—if not decades. Granted, the necessary processing power and practical experience were long lacking. Only recently has the field matured enough; nowadays, practical wisdom has increased dramatically and computational power is less of a problem. In this book, we explore such techniques, capable of bringing AI designs to life efficiently.
Computer Games and AI
As mentioned previously, AI means two different things. Which kind of AI do we mean when we say "computer game AI"? What good is AI for computer games?
It's understandable that we want intelligent characters in our games because they add to the experience and improve gameplay. Intelligent NPCs make single-player games possible, and improve the multiplayer experience, without having to rely on an existing community of (biological) people.
We want useful sidekicks, worthy deathmatch opponents, hordes of enemies that get shot in particularly entertaining fashion, and background characters that add depth to the game. Regardless of the game type—whether real-time strategy (RTS), first-person shooter (FPS), or massively multiplayer online game—intelligent NPCs are absolutely necessary to create the illusion of playing with other intelligent players.
Fundamentally, these examples revolve around synthetic characters. Because the essence of the problem is to develop a single NPC, that seems an obvious place to start (from an educational point of view). Focusing on one creature leaves vast quantities of processing power available, which provides a perfect test bed for experimentation.
Note
In AI, a smart entity is known as an agent. A system that handles more than one game unit in coordination is known as a multi-agent system. Developing multiple agents involves scaling down the AI enough so that it's feasible to scale up the number of NPCs. In essence, it's about simpler AI that uses less memory and processing power—although this is a challenge in its own right!
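The scaling trade-off described in the note can be sketched with a per-frame "thinking" budget: many lightweight agents can share a budget that a single elaborate agent would exhaust on its own. The budget units, costs, and class names below are invented for illustration:

```python
class Agent:
    """A game unit with an associated cost of 'thinking' per frame."""

    def __init__(self, name, think_cost):
        self.name = name
        self.think_cost = think_cost  # abstract units of CPU per update

    def update(self):
        return f"{self.name} acted"

def run_frame(agents, budget):
    """Multi-agent scheduler: update agents until the frame budget runs out."""
    acted = []
    for agent in agents:
        if budget < agent.think_cost:
            break  # remaining agents skip thinking this frame
        budget -= agent.think_cost
        acted.append(agent.update())
    return acted

# One elaborate agent consumes the whole budget on its own...
assert len(run_frame([Agent("boss", think_cost=10)], budget=10)) == 1
# ...or ten lightweight agents can share the same budget.
crowd = [Agent(f"npc{i}", think_cost=1) for i in range(10)]
assert len(run_frame(crowd, budget=10)) == 10
```

Scaling down each agent's `think_cost` is precisely what makes scaling up the number of NPCs feasible.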
From an external point of view, NPCs need only to display a certain level of intelligence. This is one key realization: in computer game AI, only the result matters. It doesn't really matter how NPC intelligence is achieved, as long as the creatures in the game appear believable. So AI technology is not justifiable from this outsider's point of view, because standard software engineering techniques could be used equally well to craft the illusion of intelligence (for instance, scripting).
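For instance, a scripted reaction table—plain software engineering, with no AI technique involved—can already look believable from the outside. The rules and state fields below are invented for illustration:

```python
# A scripted NPC: ordered (condition, action) rules, first match wins.
rules = [
    (lambda s: s["health"] < 20,   "flee"),
    (lambda s: s["enemy_visible"], "attack"),
    (lambda s: s["heard_noise"],   "investigate"),
    (lambda s: True,               "patrol"),  # default behavior
]

def scripted_behavior(state):
    """Pick the first rule whose condition matches the current state."""
    for condition, action in rules:
        if condition(state):
            return action

# To the player, this reads as a cautious, alert guard.
state = {"health": 80, "enemy_visible": False, "heard_noise": True}
assert scripted_behavior(state) == "investigate"
state["health"] = 10
assert scripted_behavior(state) == "flee"
```

Nothing here learns or plans, yet the observable behavior is coherent—which is exactly the outsider's point: believability does not dictate the technology behind it.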
Overview of Artificial Intelligence
To a great majority of the population, AI is the brain behind powerful cybermachines—the kind found in sci-fi films. To software developers, it's a buzzword for technology that repeatedly failed to deliver on its promises throughout the twentieth century. To academics, it's a seemingly endless source of challenges and excitement.
But how is AI relevant to game developers?
Artificial intelligence has two separate meanings; both types are beneficial to game development:
- First, AI is a form of intelligence that has been artificially re-created using machines.
- Second, AI is a set of academic techniques, research methods, and problems that fall into one sub-branch of science.
Machine Intelligence
Historically, it seems "intelligent" is a term mankind found to describe itself. Intelligence is just a human ability that distinguishes us from animals or plants. Nowadays, intelligence is commonly used as a distinguishing trait among humans; calling someone "intelligent" implies an especially clever person.
Universal Ability
Conceptually speaking, a generic form of intelligence undoubtedly exists. Humans and animals have small subsets of this ability, particular instances of universal intelligence. Humans seem to have inherited a larger portion of this universal skill. However, our biological intelligence lacks some characteristics of universal intelligence (for instance, exhaustiveness and neutrality).
Most computer-science researchers assume that biological intelligence can be reproduced, and that intelligence is not exclusively human. This statement essentially grants machines a subset of universal intelligence. AI can therefore be seen as an artificial counterpart of the intelligence produced by our biological brains. Naturally, engineering produces different results than evolution, explaining why AI has different properties than human intelligence (for instance, thoroughness). AI is another instance of universal intelligence.
It's difficult for us to understand universal intelligence because we have few advanced examples. However, we can try to define human intelligence.
Definition of Intelligence
For lack of a better definition, intelligence is a set of skills that allows humans to solve problems with limited resources [Kurzweil02]. Skills such as learning, abstract thought, planning, imagination, and creativity cover the most important aspects of human intelligence.
Given this wide variety of abilities, there is no unique problem to put them all to the test. Animals are intelligent in some ways: they are capable of surviving and managing their time, for instance. Insect colonies can adapt quickly to their environment to protect their nest. Popular IQ tests are very specific and require training more than the gift of "intelligence." These tests measure what is known as narrow intelligence.
Each problem requires different abilities. We're particularly interested in a problem that can become surprisingly complex—behaving autonomously within a realistic virtual environment. Playing games isn't just about wrist power and rudimentary reflexes! Games present interesting challenges because most humans find entertainment in solving such problems.
Computer game AI is an artificial version of this human ability. AI controls computer characters purposefully, meaning that actors and background cast don't have to be hired (as they must be in films). We'll refer to this first interpretation of AI as nonplayer character (NPC) intelligence, which implies that it's machine intelligence.
Field of Science
The second interpretation of AI is as a set of technologies. The definition on the introductory page of the AI Depot has served well over the past couple of years:
"Artificial intelligence is a branch of science that helps machines find solutions to complex problems in a more human-like fashion. This generally involves borrowing characteristics from biological intelligence, and applying them as algorithms in a computer-friendly way."
AI algorithms can be applied to practically anything—they're not just limited to re-creating human intelligence. For instance, they could be applied to managing a production chain, or perhaps to pattern recognition in medical data. The common properties of AI techniques and biological intelligence (for instance, learning or abstraction) make these techniques part of the field of AI.
As a discipline, AI sits at the crossroads of many subjects (for instance, computer science, psychology, and mathematics). These subjects all share a significant body of common knowledge. Given such a wide variety of influences, it's difficult to say what belongs to AI and what doesn't. It seems to vary from year to year, depending on the popularity of each field. There is an increasingly large overlap with other disciplines anyway, which is a good thing; it reveals the maturity of the field and its consistency with other theories.
Historically, AI tended to be very focused, containing detailed problems and domain-specific techniques. This focus makes for easier study—or engineering—of particular solutions. These specific techniques are known as weak AI because they are difficult to apply outside of their intended domain.
This weakness of AI has become a roadblock—one that can't be driven around. Weak AI has been extremely successful in many domains, but human experts need to apply it manually. When trying to assemble techniques together to solve bigger problems, it becomes evident that techniques are too focused.
This is one reason why we need AI engineers. If AI were good enough, programmers wouldn't be needed. This is (at least) a few decades off, however; until then, we need humans to develop the systems. This is the case for AI technology in computer games, too; rest assured, programmers are still necessary!