
Movement in Game Worlds: The Environment and Space


Let's focus on the game world where movement takes place. The environment is planned by the designer, created with modeling tools by the graphics artist, and updated and displayed within the game by the programmer's engine. A wide variety of technologies can be used along this production pipeline, but the choice is mostly inconsequential from the AI's perspective.

What's important is the information contained implicitly within the world. Most environments are split into two components: structure and detail. After discussing them separately, this section looks at how they combine to define space.

Sum of Its Parts

The information about the environment provided to the players can be conceptually divided into two main components: structure and detail. This is more than just a theory, because most game engines strongly distinguish the two for efficiency reasons. The physics simulation handles the structure, and the detail is for graphics rendering. In the real world, of course, the boundaries between them are much more debatable, so we should consider ourselves lucky to be dealing with computer games!

  • The structure is the part of the environment that can physically affect movement. Naturally this includes the floor, walls, and doors; chairs, tables, and other furniture; trees, roads, and bridges; and so on. This list is not exhaustive; the structure covers all the elements in the world that are mostly static (those the players cannot trivially push aside).

  • As for the detail, it consists of the "cosmetic" part of the environment: the things game characters cannot collide with, or only insignificantly: books, kitchen utensils, or bits of food; grass, shrubs, and small ledges; among many others.

There is one important omission to notice from the preceding definitions. What happens to living creatures and other mobile entities? Although not necessarily created with the same tools as the rest of the game world, they too are arguably part of the environment. We could argue that game characters have properties of both the structure and the detail. However, some developers (and robotics researchers) believe that they should be treated separately, as an entirely different set. Generally in games, the set of living creatures ends up being forced into one category or the other (for instance, players are detail that's ignored during movement, or players are part of the environment structure that movement takes into account).

Essentially, the problem is about combining these three components of the environment to create an understanding of space. We want to understand space as well as possible to develop high-quality movement. Considering moving creatures as either detail or structure can have a negative effect on the movement, especially when the problem has not been identified beforehand.

Fortunately, we have the luxury of being able to decide how to handle living creatures as we design the AI. In a deathmatch, for example, it's fine to ignore the other animats for movement; they can be blown up with a rocket if they get in the way! In cooperative mode, however, a separate category is needed for nonplayer characters (NPCs) so that they can ask each other to move. Finally, other players can be considered as obstacles in large crowds.
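
As a minimal sketch of these three interpretations (the type names and game modes below are assumptions for illustration, not code from the text), the decision can be expressed as a simple mapping from game mode to category:

    // Hypothetical categories and game modes, invented for illustration.
    enum class Category { Structure, Detail, Creature };
    enum class GameMode { Deathmatch, Cooperative, Crowd };

    // Decide how a living creature is treated by the movement AI,
    // depending on the kind of game being played.
    Category classifyCreature(GameMode mode)
    {
        switch (mode) {
        case GameMode::Deathmatch:  return Category::Detail;    // ignored by movement
        case GameMode::Cooperative: return Category::Creature;  // separate set; NPCs can negotiate
        case GameMode::Crowd:       return Category::Structure; // treated as an obstacle
        }
        return Category::Detail;
    }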

Each of these three interpretations is a way of understanding the environment and, most importantly, space as it relates to the NPC AI.

Defining Space

Fundamentally, the game world describes space. As the shape of the environment, the structure plays an important role; as far as the physics engine is concerned, all the movement is defined by the structure. However, neither human players nor AI characters can always match this physical knowledge of space; it will be extremely difficult, if not impossible, for them to understand the environment perfectly.

In some cases, when a simple world is stored in an explicit fashion (for instance, a 2D grid), understanding it can be a manageable task. As the design becomes more complex, many elements combine to define the environment in an intricate fashion (for example, a realistic 3D world). However, no matter how well the environment is defined or how accurate its physical rules are, it is not necessarily as clear to players and NPCs.

  • Human players have to assimilate the environment based on imperfect visual information. The detail of the environment plays an important part visually.

  • AI characters usually get a simplified version of this world before the game starts (offline), in a fashion that is both imprecise and incomplete (for instance, waypoints as guides). Alternatively, the environment can be perceived and interpreted online (as humans would).
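
For instance, such an offline simplification might be nothing more than a sparse waypoint graph; the structure below is a hypothetical sketch, not a format prescribed by the text:

    #include <vector>

    // A hypothetical offline simplification of the world: a waypoint graph.
    // It is imprecise (positions are only samples of free space) and
    // incomplete (not every possible route is represented).
    struct Waypoint {
        float x, y, z;                // position in the world
        std::vector<int> neighbours;  // indices of directly reachable waypoints
    };

    using WaypointGraph = std::vector<Waypoint>;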

For all intents and purposes, space is an abstract concept that cannot be fully understood. Different techniques will have varying degrees of precision, but all will be flawed. Don't see this as a problem, just accept it and embrace it; perfection is overrated! This lesson was learned in robotics thanks to the wave of nouvelle AI robots.

After space has been figured out, it's possible to determine which parts are free space and which can be considered solid. This implicitly defines all movement: what is possible and what isn't. Only then can intelligent movement be considered.
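
In the simple 2D grid case mentioned earlier, this free/solid classification can be stored explicitly. The following is a minimal sketch with invented names:

    #include <vector>

    // A sketch of a 2D grid world where each cell is either solid (blocked
    // by the structure) or free; movement is only possible into free cells.
    class GridSpace {
    public:
        GridSpace(int width, int height) : width_(width), solid_(width * height, false) {}

        void setSolid(int x, int y, bool solid) { solid_[y * width_ + x] = solid; }

        // Movement is implicitly defined: a step is only possible into free cells.
        bool isFree(int x, int y) const { return !solid_[y * width_ + x]; }

    private:
        int width_;
        std::vector<bool> solid_;
    };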

Ref: By Alex J. Champandard

Reactive Techniques in Game Development

Just like reactive behaviors, reactive (or reflexive) techniques have many advantages. In fact, reactive AI techniques have been at the core of most games since the start of game development. As explained, these techniques are often enhanced to provide non-determinism, but the result can usually be simplified into a deterministic mapping.

Advantages in Standard Game AI
The major advantage of reactive techniques is that they are fully deterministic. Because the exact output is known given any input pattern, the underlying code and data structures can be optimized to shreds. The debugging process is also trivial. If something goes wrong, the exact reason can be pinpointed.

The time complexity for determining the output is generally constant. There is no thinking or deliberation; the answer is a reflex, available almost immediately. This makes reactive techniques ideally suited to games.
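
A deterministic, constant-time reactive mapping can be as simple as a table from perceived situation to action; the situations and actions below are invented for illustration:

    // A reactive mapping: the same input always produces the same output,
    // and the lookup takes constant time (no deliberation).
    enum class Situation { EnemyVisible, LowHealth, Idle };
    enum class Action    { Attack, Retreat, Patrol };

    Action react(Situation s)
    {
        switch (s) {
        case Situation::EnemyVisible: return Action::Attack;
        case Situation::LowHealth:    return Action::Retreat;
        case Situation::Idle:         return Action::Patrol;
        }
        return Action::Patrol;
    }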

Success stories of such approaches are very common, and not only in computer games. Historically, these have been the most widely used techniques since the dawn of game AI:

  • Scripts are small programs (mostly reactive) that compute a result given some parameters. Generally, only part of the programming language's flexibility is used, which simplifies the task.

  • Rule-based systems are a collection of "if...then" statements that are used to manipulate variables. These are more restrictive than scripts, but have other advantages.

  • Finite-state machines can be understood as rules defined for a limited number of situations, describing what to do next in each case.

These standard techniques have proven extremely successful. Scripts essentially involve using plain programming to solve a problem (see Chapter 25, "Scripting Tactical Decisions"), so they are often a good choice. Rule-based systems (covered in Part II) and finite-state machines (discussed in Part VI) can be achieved with scripting, but there are many advantages in handling them differently.
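
As a tiny illustration of the finite-state machine idea (the states and transitions are made up for this sketch, not taken from the chapters mentioned above):

    // Minimal finite-state machine: rules defined for a limited number of
    // situations, each describing what to do next.
    enum class State { Patrol, Chase, Flee };
    enum class Event { SeeEnemy, LoseEnemy, TakeDamage };

    State nextState(State current, Event e)
    {
        switch (current) {
        case State::Patrol:
            return (e == Event::SeeEnemy) ? State::Chase : State::Patrol;
        case State::Chase:
            if (e == Event::TakeDamage) return State::Flee;
            if (e == Event::LoseEnemy)  return State::Patrol;
            return State::Chase;
        case State::Flee:
            return (e == Event::LoseEnemy) ? State::Patrol : State::Flee;
        }
        return current;
    }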

Advantages for Animats
The reactive approach also has benefits for animats, dealing with embodiment extremely well and supporting learning.

Embodiment
With embodiment, most of the information perceived by the animat comes from its surroundings and needs to be interpreted to produce intelligent behavior. Reactive behaviors are particularly well suited to interpreting this local information about the world (as animals have evolved to do).

Also, it's possible to make the reactive behaviors more competent by providing the animat with more information about the environment. Thanks to their well-developed senses, humans perform very well with no advance knowledge of their environment. Instead of using better AI, the environment can provide higher-level information, matching human levels of perception. Essentially, we make the environment smarter, not the animats.
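
One way to picture a "smarter" environment is to let the world expose annotations (cover spots, ladders, danger areas) that the animat queries directly instead of deducing them from raw geometry; the names below are assumptions for illustration:

    #include <cmath>
    #include <vector>

    // Hypothetical annotations placed in the world by the designer or tools.
    enum class HintType { CoverSpot, Ladder, DangerArea };

    struct Hint {
        HintType type;
        float x, y, z;   // where the hint is placed in the world
    };

    // The animat asks the environment for hints near its position, instead of
    // deducing that information from raw geometry.
    std::vector<Hint> queryHints(const std::vector<Hint>& world,
                                 float x, float y, float z, float radius)
    {
        std::vector<Hint> nearby;
        for (const Hint& h : world) {
            float dx = h.x - x, dy = h.y - y, dz = h.z - z;
            if (std::sqrt(dx * dx + dy * dy + dz * dz) <= radius)
                nearby.push_back(h);
        }
        return nearby;
    }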

Learning
Most learning techniques are based on learning reactive mappings. So if we actually want to harness the power of learning, problems need to be expressed in reactive terms.

Additionally, it's often very convenient to teach the AI using the supervised approach: "In this situation, execute this action." Reactive behaviors are best suited to modeling this.
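
In its simplest form, teaching a reactive behavior this way just means recording situation/action pairs and replaying the taught action when the situation recurs; real learning techniques generalize rather than memorize, so this sketch (with invented names) only illustrates the idea:

    #include <map>

    // Situations and actions as in the earlier reactive-mapping sketch.
    enum class Situation { EnemyVisible, LowHealth, Idle };
    enum class Action    { Attack, Retreat, Patrol };

    // A trivially "learned" reactive mapping: the teacher says
    // "in this situation, execute this action" and the table remembers it.
    class TaughtBehavior {
    public:
        void teach(Situation s, Action a) { mapping_[s] = a; }

        Action react(Situation s) const {
            auto it = mapping_.find(s);
            return it != mapping_.end() ? it->second : Action::Patrol;  // default action
        }

    private:
        std::map<Situation, Action> mapping_;
    };
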
Ref: By Alex J. Champandard

AI Development Process

Developing AI is often an informal process. Even when starting out with the best of intentions and ending up with the perfect system, the methodology is often left to improvisation, especially in the experimental stages. In fact, most developers have their own favorite approach. Also keep in mind that different methodologies are suited to different problems.

In a somewhat brittle attempt to formalize what is essentially a dark art, I developed the flow chart shown in Figure 2.1. This flow chart describes the creation of one unique behavior. Jumping back and forth between different stages is unavoidable, so only the most important feedback connections are drawn.

Figure 2.1. An interpretation of the dark art that is the AI development process.


Hopefully, Figure 2.1 makes the process seem less arbitrary! At the least, this method is the foundation for the rest of this book. We'll have ample opportunities to explore particular stages in later chapters, but a quick overview is necessary now.

Outline
The process begins with two informal stages that get the development started:

  • The analysis phase describes how the existing design and software (platform) affect a general task, notably looking into possible restrictions and assumptions.

  • The understanding phase provides a precise definition of the problem and high-level criteria used for testing.

Then, there are two more formal phases:

  • The specification phase defines the interfaces between the AI and the engine. This is a general "scaffolding" that is used to implement the solution.

  • The research phase investigates existing AI techniques and expresses the theory in a way that's ready to be implemented.

All this leads into a couple of programming stages:

  • The development phase actually implements the theory as a convenient AI module.

  • The application phase takes the definition of the problem and the scaffolding, using the module to solve the problem.

This is where the main testing loop begins:

  • The experimentation phase informally assesses a working prototype by putting it through arbitrary tests that proved problematic in previous prototypes.

  • The testing phase is a thorough series of evaluations used only on those prototype candidates that have a chance to become a valid solution.

Finally, there is a postproduction phase:

  • The optimization phase attempts to make the actual implementation lean and mean.

These phases should not be considered as fixed because developing NPC AI is a complex and unpredictable process. Think of this methodology as agile—it should be adapted as necessary.

Iterations
These stages share many interdependencies, which is unavoidable because of the complexity of the task itself (just as with software development). It is possible to take precautions to minimize the size of the iterations by designing flexible interface specifications that don't need changing during the application phase.

The final product of this process is a single behavior. In most cases, however, multiple behaviors are required! So you need to repeat this process for each behavior. There is an iteration at the outer level, too, aiming to combine these behaviors. This methodology is in the spirit of nouvelle AI, which is directly applicable to game development.

Luckily, it's possible to reduce the number of outer iterations by using a clever AI architecture design, which is discussed in the next chapter.
Ref: By Alex J. Champandard