Movement in Game Worlds : Testing Conditions

Because we are aiming for state-of-the-art NPC intelligence, nothing less than complex 3D worlds, like those human players deal with, will be satisfactory. The environment needs to be one frequently used by human players, not a simplified test bed built for the sake of AI development. By keeping the experimentation process up to the standard of realism of actual game worlds, fewer problems will arise from fundamentally flawed assumptions. Keeping the test "levels" and scenarios diverse implicitly guards against such problems. These guidelines need to be followed throughout the development of the movement.

To be realistic, the environment should include doors and corridors, stairs and ladders, narrow ledges and dead ends, floors with gaps and irregular terrain, closed rooms and open spaces, jump pads and teleporters.

Now that the environments have been explained and defined, our focus can shift to the actual movement. As discussed, the world has a very important impact on the movement, but game characters need certain capabilities to use it to their advantage.

Movement in Game Worlds : Assumptions

Given knowledge of the game simulation, let's outline our assumptions. There is a physics system that takes care of applying the motor commands to the data structures representing each player. We'll suppose this is done with kinematics, in a physically plausible fashion. The physics system also resolves any conflicts that arise within the simulation. This is the locomotion, which falls outside the control of the AI.
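
To make this concrete, here is a minimal sketch of the kind of kinematic update such a physics system might perform each frame. All the names and numbers (MotorCommand, PlayerState, the 6 m/s speed limit) are illustrative assumptions, not taken from any particular engine:

    #include <cmath>

    struct Vec3 { float x = 0, y = 0, z = 0; };

    struct MotorCommand {
        Vec3  desiredVelocity;  // requested by the human player or the AI
        float desiredYaw = 0;   // requested facing angle, in radians
    };

    struct PlayerState {
        Vec3  position;
        Vec3  velocity;
        float yaw = 0;
    };

    // Apply a motor command kinematically, clamped to a plausible top speed.
    // Collision conflicts are resolved afterwards by the physics system.
    void integrate(PlayerState& s, const MotorCommand& cmd, float dt)
    {
        const float maxSpeed = 6.0f; // metres per second (assumed limit)
        Vec3 v = cmd.desiredVelocity;
        float speed = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
        if (speed > maxSpeed) {
            float k = maxSpeed / speed;
            v.x *= k; v.y *= k; v.z *= k;
        }
        s.velocity = v;
        s.position.x += v.x * dt;
        s.position.y += v.y * dt;
        s.position.z += v.z * dt;
        s.yaw = cmd.desiredYaw;
    }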

We'll also assume that an animation system takes care of the low-level aspects of motion. Specifically, moving individual limbs and blending the appropriate animation cycles is handled transparently to the AI.

For the interface to be of any use to the animat, it's essential for the information returned to be consistent with the physics of the environment. Consequently, we'll assume the backend implementation of these interfaces relies on the movement simulation.
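
As a rough sketch of what such a backend might look like, consider a sensory query answered by the same collision system that resolves the movement; physicsRaycast is a hypothetical stand-in for whatever query the engine actually provides:

    struct Vec3 { float x, y, z; };

    // Placeholder; a real engine would test against the world geometry.
    bool physicsRaycast(const Vec3& /*from*/, const Vec3& /*to*/)
    {
        return false;
    }

    class MovementSensor {
    public:
        // "Can I move straight from here to there?" is answered by the
        // same collision system that later resolves the actual movement,
        // so the information can never contradict the physics.
        bool isPassable(const Vec3& from, const Vec3& to) const
        {
            return !physicsRaycast(from, to);
        }
    };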

Movement in Game Worlds : Handling Movement

The type of environment has a direct effect on the creatures that inhabit it. In fact, the type of movement is imposed upon the creatures by the environment.

Regardless of the dimensionality of the environment (2D or 3D), movement is handled in a similar fashion. Although there are many possible ways of implementing this, a pattern arises in each of these designs: a simulation loop, integration, and collision detection. The details classify as engine development rather than AI, but an overview of the principles will come in handy later.
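
A bare-bones version of that pattern might look as follows; every type and function here is an illustrative placeholder:

    #include <vector>

    struct Entity { /* position, velocity, orientation... */ };

    void integrate(Entity& /*e*/, float /*dt*/) { /* apply velocities over dt */ }
    void resolveCollisions(std::vector<Entity>& /*world*/) { /* clip to geometry */ }

    void simulationStep(std::vector<Entity>& world, float dt)
    {
        for (Entity& e : world)
            integrate(e, dt);      // move everything forward in time
        resolveCollisions(world);  // then fix any interpenetration
    }

    int main()
    {
        std::vector<Entity> world(8);
        for (int frame = 0; frame < 100; ++frame)
            simulationStep(world, 1.0f / 60.0f); // assumed fixed 60 Hz step
    }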

The game engine has to take into account movement requests from all the players and creatures in the world. The physics system is in charge of resolving the collisions with the environment as well as with other entities. Each of the data structures representing in-game objects will then be updated accordingly, with new velocities, positions, and orientations.

To date, animation systems take care of low-level limb "control" as predefined animation cycles (for example, walk and run) are combined to move the player's entire body. Such animation cycles can be chosen based on the higher-level control of the players. Therefore, characters in the game are not actually walking (active behavior), they are merely animated (passive process). This animation system requires a minimal amount of coordination with the physics system to prevent breaking the illusion with inconsistencies (for instance, walking on the spot when stuck against walls, running in midair).
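
For instance, the cycle selection might reduce to something as simple as the following sketch, where the thresholds are assumed values and the onGround/actualSpeed inputs come from the physics results:

    enum class Cycle { Idle, Walk, Run };

    // Choose a predefined cycle from higher-level state, with a minimal
    // consistency check against the physics: no locomotion cycle while
    // blocked by a wall (no actual motion) or while airborne.
    Cycle selectCycle(float actualSpeed, bool onGround)
    {
        if (!onGround || actualSpeed < 0.1f)
            return Cycle::Idle;    // avoids walking on the spot
        return (actualSpeed > 3.0f) ? Cycle::Run : Cycle::Walk;
    }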

For all intents and purposes, the low-level aspects of movement are abstracted from the player, greatly simplifying the process. This underlying layer can be referred to as a locomotion layer. Computer-controlled characters can also rely on such locomotive capabilities. Indeed, handling the movement for AI and human players together is becoming commonplace in game development; it's a good way to reduce the amount of code by inheriting AI classes from human-player classes (that is, with the same locomotive functions).
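
One plausible shape for that inheritance, with purely illustrative class names, is sketched below:

    class Player {
    public:
        virtual ~Player() = default;
        // The locomotion layer, shared by humans and NPCs alike.
        void move(float /*forward*/, float /*strafe*/) { /* issue motor command */ }
        void turn(float /*yawDelta*/)                  { /* issue motor command */ }
    };

    class HumanPlayer : public Player {
    public:
        void readInput() { /* translate keyboard and mouse into move()/turn() */ }
    };

    class AIPlayer : public Player {
    public:
        void think()     { /* decide where to go, then call move()/turn() */ }
    };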

In the near future, there are prospects of the AI taking over low-level limb control, done in a fully dynamic fashion. For human players, the controls are unlikely to change, as they often prove complex enough already. Humans will just have to adjust to having the AI control their character's legs in the game! For nonplayer characters (NPCs), the AI could theoretically control the entire body as a whole, dealing with locomotion at the same time as high-level requests. However, keeping the locomotion separate has many advantages (for example, simplicity and little redundancy), so it is worth retaining this distinction in the future.

Movement in Game Worlds : Types of Game Worlds

It's surprising how many games rely on movement as their most fundamental concept. They range from chiefly strategic logic games, such as Chess or Diplomacy, to more entertaining alternatives, such as Horses or Downfall. Let's not forget computer games, of course; first-person shooters, real-time strategy games, and even role-playing games would be extremely dull without motion!

Virtual worlds may be widespread, but they come in many varieties. Notably, there are worlds of different dimensions (for instance, 2D or 3D), with different sizes and precision. Some worlds are based on grids (discrete), and some are free of such restrictions (continuous).

Conceptually, there is a big difference between these variations, which translates into the feel and style of the gameplay. Behind the scenes in the engine, distinct data structures and implementation tricks are used in each case, but this discussion focuses on the consequences for the AI.

On Dimensions

Some games take place in 2D levels: top-down worlds (Warcraft II), or side views with moving platforms and smooth scrolling (Super Mario Brothers). Alternatively, the world itself can be fully 3D, with different floors in buildings (Half-Life), or even tunnels (Descent 3).

It's important to note that the dimensions of the game world are independent from the rendering. For example, the original Doom is in fact a 2D world with the floors at different levels. Even recent 3D first-person shooters have a lot in common with Doom. This leads to the assumption that the underlying properties of the environment are very similar. Indeed, much of the technology is applicable in 2D and 3D.

Most academic research projects deal with top-down 2D AI movement algorithms, for example [Seymour01], which provides a nice amalgamation of past research. The same goes for most of the figures here; they show the problem projected onto the floor. This is primarily because 2D movement is simpler than its 3D counterpart.

As a justification, one often reads that "2D generalizes to 3D," so it's fine to focus on the simpler alternative. With an algorithm defined for two dimensions, the same concept is reapplied to an additional dimension. This is mostly the case, although special care needs to be taken to devise a solution that scales up with respect to the number of dimensions.

For movement in 3D games, this simplification is also acceptable. In realistic environments, gravity plays an important role. All creatures are on the floor most of the time, which almost reduces the problem to a 2D one. It is not quite 2D because floor surfaces can be superimposed at different heights (for instance, a spiraling staircase); this type of world is often known as 2.5D, halfway between the complexity of two and three dimensions.
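
As a small illustration of the 2.5D idea, the floor plane can carry the movement reasoning while a height value disambiguates superimposed surfaces; the types and the storey height below are assumptions:

    #include <cmath>

    // The ".5" is a height value that tells superimposed floor surfaces
    // apart; movement reasoning otherwise stays in the 2D floor plane.
    struct Position2_5D {
        float x, y;    // coordinates in the floor plane
        float height;  // which layer of floor the point lies on
    };

    // Two points on a spiral staircase may share (x, y) yet belong to
    // different turns; comparing heights keeps them distinct.
    bool sameFloor(const Position2_5D& a, const Position2_5D& b)
    {
        const float floorSpacing = 3.0f; // assumed storey height in metres
        return std::fabs(a.height - b.height) < floorSpacing * 0.5f;
    }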

In the process of applying, or generalizing, our 2D solution to 3D, watch out for a couple of pitfalls:

  • Complex environments often exhibit tricky contraptions (for example, jump pads or lifts), which are quite rare in flat worlds.

  • Instead of naively applying the original algorithm, tailor it to make the most of the problem at hand.

These factors influence the design of the solution.

Discrete Versus Continuous

Another property of game worlds is their precision, in both time and space. There are two different approaches, discrete and continuous, applying to events in space or time, as shown in Figure 5.1. A discrete variable can accept only a finite number of values, whereas a continuous variable can theoretically take infinitely many.

Figure 5.1. Two types of game environments observed top-down. On the left, space is continuous, whereas the world on the right is based on a grid (discrete).

Conceptually speaking, there is little difference between a continuous domain and a discrete one. Indeed, when the discretization is too fine to notice, the domain appears continuous. This doesn't apply mathematically, of course, but on modern computers, there are hardware limitations. To simulate continuous media, current software must select an appropriate discrete representation.

Looking at examples for space and time in game worlds shows that precision is not only a game design issue, but also a fundamental element of the AI. (Indeed, it's an independent AI design decision.)

Space

The game environment can be limited to cells of a grid; it's discrete if there are a finite number of grid coordinates. Continuous environments have unlimited locations because they are not restricted to grids.

To store coordinates in the world, data types must be chosen. These are usually single-precision floating-point numbers, which essentially allocate 32 bits to discretize each dimension. Although there are benefits to using a double-precision representation (64 bits), single precision is enough to make each object seem as though it can take any possible position and orientation in space.
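
A quick way to convince yourself that single precision is itself a discretization is to print the gap between adjacent representable positions at increasing distances from the origin, as in this small sketch:

    #include <cmath>
    #include <cstdio>
    #include <initializer_list>

    int main()
    {
        // The gap between adjacent representable floats grows with
        // distance from the origin, yet stays imperceptible at
        // game-world scales.
        for (float x : {1.0f, 100.0f, 10000.0f}) {
            float next = std::nextafterf(x, x + 1.0f);
            std::printf("at %g m, the smallest step is %g m\n",
                        static_cast<double>(x), static_cast<double>(next - x));
        }
    }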

Time

Actions can be discrete in time, taking place at regular intervals, as in turn-based strategy games. Alternatively, actions can be continuous, happening smoothly through time, as in fast-paced action games.

Similar floating-point data types can be chosen to represent this dimension. However, the choice is slightly more complicated because the programmer can't "manipulate" time as easily. Conceptually, a single processor can only execute one logical process at a time, despite being able to perform multiple atomic operations in parallel. Consequently, it's not possible for each object in the world to be simulated continuously. The closest the computer can get is an approximation: a portion of code determines what happened since the last update (for example, every 0.1 seconds). So despite its precision, a full floating-point number is not entirely necessary to represent a value in time (see Figure 5.2).

Figure 5.2. On the top, a continuous timeline with events happening at any point in time. On the bottom, the same timeline, but discrete. The time of each event is rounded to the closest mark every two seconds.
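
A common way to implement that approximation is a fixed-timestep loop that consumes wall-clock time in whole steps; this sketch uses the 0.1-second interval mentioned above, though real games usually step faster:

    void update(float /*dt*/) { /* advance the simulation by dt seconds */ }

    // Wall-clock time since the last frame is consumed in fixed steps,
    // so every event is effectively rounded to an update interval.
    void runFrame(float frameTime)
    {
        static float accumulator = 0.0f;
        const float step = 0.1f;       // assumed fixed simulation interval
        accumulator += frameTime;
        while (accumulator >= step) {  // catch up in whole steps
            update(step);
            accumulator -= step;
        }
    }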


Conversions

Fundamentally, because of the limitations of computers, each of these dimensions can be considered discrete, even when it does not appear that way. Regardless, these data types can be converted to one another by a simple mapping; this involves scaling, converting to an integer by rounding, and rescaling back. Arbitrary discretizations can thereby be obtained.
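
In code, such a mapping might look like the following sketch, where cellSize selects the granularity of the discretization (both helper names are illustrative):

    #include <cmath>

    // Scale, round to an integer, rescale: any cellSize works, which is
    // why arbitrary discretizations can be obtained.
    int toGrid(float coordinate, float cellSize)
    {
        return static_cast<int>(std::lround(coordinate / cellSize));
    }

    float toContinuous(int cell, float cellSize)
    {
        return cell * cellSize; // the cell's representative point
    }

    // With 0.5 m cells, a position of 3.7 m maps to cell 7, which
    // converts back to 3.5 m: a controlled loss of precision.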

In practice, it's possible for the world to have one type of discretization and for the AI to use a different one. For example, mapping continuous onto discrete domains enables us to exploit similarities in the AI design and to reuse existing AI routines (for instance, standard grid-based A* pathfinding). Of course, we may not want to perform the conversion, for many reasons (for example, compromising behavior quality or sacrificing the complexity of the environment). But at least AI engineers have the luxury to decide!