
CameraBots: Cinematography for Games with Non-Player Characters as Camera Operators

Resource type
Date created
2005-06-01
Authors/Contributors
Abstract
Cinematography can be defined as the art of film making [1]. Among other things, it describes principles and techniques pertaining to the effective use of cameras to film live action. The correct application of these principles and techniques produces filmed content that is more engaging, compelling and absorbing for the viewer. 3D computer games employ virtual cameras in order to provide the player with an appropriate view of the game world. These virtual cameras can simulate all of the functionality of their real-world counterparts, yet little effort is usually made to incorporate cinematographic techniques and principles into their operation. Typically, severe constraints are placed on the positioning of these cameras: for example, a third-person camera is positioned at a fixed distance behind the player's avatar (the character that the player controls), and a first-person camera directly simulates the avatar's viewpoint. The exception is non-interactive cut-scenes, where more sophisticated camera work is common.

In this paper we describe our work on enabling the virtual camera in a 3D computer game to employ principles from cinematography throughout gameplay. The successful employment of this approach can result in a more dramatic and compelling experience, as the full arsenal of cinematic camera operations, such as close-ups, pans, tilts and zooms, becomes available. Cinematography provides guidelines as to how these can be used to make the viewer more engrossed in the action [5], and also advises how to employ consistent camera work to prevent the viewer from becoming disoriented, a common occurrence with current camera configurations in games. Certain camera angles or movements can be used to inform the viewer about imminent events (e.g. the camera may focus on a door when a person is about to walk through it) or to help them interpret the events on screen. Conversely, for dramatic effect, certain events or parts of a scene can be hidden from view until the appropriate time.

Cinematography achieves much of its effect by making appropriate cuts between different camera positions at the correct moments [1]. This presents an immediate problem, as games typically rely on a single virtual camera and therefore it is not possible to make cuts. We solve this problem by introducing multiple cameras controlled by CameraBots, autonomous agents within a game whose role it is to film the action in much the same way that real camera operators do on a film set. These CameraBots are closely modelled on the existing Non-Player Characters (NPCs) [2, 3, 4, 6] found in most game engines. They can navigate around the game world but do not participate in the action, and hence are not rendered onscreen. Multiple CameraBots will typically be active at any instant during gameplay, and the system can thus cut between the views that they provide.

We describe five classes of CameraBot, each of which employs guidelines from cinematography in order to orient and position itself to accomplish a particular type of shot. The EstablishingCBot provides establishing shots for a particular scene: it films from a sufficient distance and an appropriate angle such that a good proportion of the setting and the characters in it are visible [5], and it is often used when the action moves to a new setting. The CharacterCBot shoots character shots, which frame one or more characters. The CloseUpCBot shoots closer and more dramatic shots of a single character. The FirstPersonCBot films through the avatar's eyes, as employed in first-person shooter games, and is used when the player requires close control and accuracy. The OTSCBot provides over-the-shoulder shots that follow the avatar when moving: in practice the bot is positioned directly behind the avatar and gives a good general view of what is ahead and of where the avatar is positioned in the setting.
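To make this class structure concrete, the following is a minimal sketch of how such a family of CameraBots might be organised. It is illustrative only: the authors' implementation builds on the Quake II engine's existing NPC code (written in C), and every name here (CameraBot, Pose, ShotRequest, computeShot) and every framing value is an assumption rather than part of the published system.

    // Illustrative sketch only: a possible shape for the CameraBot hierarchy.
    // All type and member names, and the framing numbers, are assumptions;
    // the authors' system extends the Quake II engine's NPC code, which is C.

    struct Vec3 { float x, y, z; };

    struct Pose {              // where a bot stands and where it looks
        Vec3  position;
        Vec3  lookAt;
        float fieldOfViewDeg;
    };

    struct ShotRequest {       // what the bot has been told to film
        Vec3 subjectPosition;  // the character or event of interest
        Vec3 subjectFacing;    // unit vector: which way the subject faces
        bool handHeld;         // steady vs. hand-held style parameter
    };

    // Base class: navigates the world like an NPC but is never rendered
    // and never takes part in the action.
    class CameraBot {
    public:
        virtual ~CameraBot() = default;
        virtual Pose computeShot(const ShotRequest& req) const = 0;
    };

    // Establishing shot: far enough back, and high enough, that a good
    // proportion of the setting and its characters is visible.
    class EstablishingCBot : public CameraBot {
    public:
        Pose computeShot(const ShotRequest& req) const override {
            Vec3 pos = { req.subjectPosition.x - 12.0f * req.subjectFacing.x,
                         req.subjectPosition.y - 12.0f * req.subjectFacing.y,
                         req.subjectPosition.z + 4.0f };
            return { pos, req.subjectPosition, 75.0f };   // wide framing
        }
    };

    // Close-up: tight, dramatic framing of a single character.
    class CloseUpCBot : public CameraBot {
    public:
        Pose computeShot(const ShotRequest& req) const override {
            Vec3 pos = { req.subjectPosition.x + 1.0f * req.subjectFacing.x,
                         req.subjectPosition.y + 1.0f * req.subjectFacing.y,
                         req.subjectPosition.z + 1.7f };
            return { pos, req.subjectPosition, 35.0f };   // narrow field of view
        }
    };

    // Over-the-shoulder: directly behind the avatar, showing what lies ahead.
    class OTSCBot : public CameraBot {
    public:
        Pose computeShot(const ShotRequest& req) const override {
            Vec3 pos   = { req.subjectPosition.x - 2.5f * req.subjectFacing.x,
                           req.subjectPosition.y - 2.5f * req.subjectFacing.y,
                           req.subjectPosition.z + 2.0f };
            Vec3 ahead = { req.subjectPosition.x + 5.0f * req.subjectFacing.x,
                           req.subjectPosition.y + 5.0f * req.subjectFacing.y,
                           req.subjectPosition.z };
            return { pos, ahead, 70.0f };
        }
    };

CharacterCBot and FirstPersonCBot would follow the same pattern, differing only in how computeShot places and orients the camera; the hand-held style parameter could be realised as a small per-frame perturbation of the returned pose.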
The CameraBots have various parameters which can be used to specify which events to film (or, for the CharacterCBot and CloseUpCBot, which characters to film) and what style, e.g. steady or hand-held, to use. In our implementation we use the existing code that drives NPCs in the Quake II game engine to create our CameraBots. This provides us with an established method of adding artificially intelligent characters to a game, and so we can harness functionality that is already present.

In order to coordinate the CameraBots such that guidelines for shooting different types of scenes may be employed, we introduce two additional entities, the Director module and the Cinematographer module. The Director continually examines the game and uses criteria informed by cinematography to decide what action is to be filmed. These criteria include whether or not the action being examined relates to the avatar (the protagonist from a cinematic point of view) and how much character interaction is occurring relative to that in other parts of the game. The Cinematographer examines the selected action and chooses a suitable method with which to film it; the Director may provide input into this choice. The role of the Cinematographer then involves introducing and removing CameraBots, telling them what to film, and cutting between the resultant views at the appropriate time (an illustrative sketch of this coordination follows the reference list below).

Of great importance is that the camera work produced does not prevent the game player from carrying out required tasks. We incorporate task-specific information into our camera system to ensure this does not occur. We also consider providing views to game spectators in addition to players; in this case it is possible to employ more concepts from cinematography, since task-relevant views are not required.

References

1. Brown, B. (2002). Cinematography: Image Making for Cinematographers, Directors and Videographers. Oxford: Focal Press.
2. Fairclough, C., Fagan, M., Mac Namee, B. and Cunningham, P. (2001). Research Directions for AI in Computer Games. Proceedings of the Twelfth Irish Conference on Artificial Intelligence and Cognitive Science, pp. 333-344.
3. Laird, J. E. and Duchi, J. C. (2000). Creating Human-Like Synthetic Characters with Multiple Skill Levels: A Case Study Using the Soar Quakebot. AAAI Technical Report SS-00-03. Menlo Park, CA: AAAI Press.
4. Laird, J. E. (2000). It Knows What You're Going To Do: Adding Anticipation to a Quakebot. AAAI 2000 Spring Symposium on Artificial Intelligence and Interactive Entertainment, AAAI Technical Report SS-00-02. Menlo Park, CA: AAAI Press.
5. Mascelli, J. V. (1965). The Five C's of Cinematography. Los Angeles: Silman-James Press.
6. Reynolds, C. (1999). Steering Behaviors for Autonomous Characters. Game Developers Conference 1999.
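The sketch below illustrates one possible shape for the Director/Cinematographer coordination described in the abstract. It is illustrative only: the class and method names (Director, Cinematographer, selectAction, chooseShot, shouldCut) and the scoring and shot-selection heuristics are assumptions, not the authors' published design.

    // Illustrative sketch only: one possible shape for the Director /
    // Cinematographer coordination. Names and heuristics are assumptions,
    // not the authors' published design.
    #include <vector>

    struct GameAction {            // a candidate piece of action to film
        bool involvesAvatar;       // the protagonist, cinematically speaking
        int  interactionLevel;     // how much character interaction is occurring
    };

    enum class ShotType { Establishing, Character, CloseUp, FirstPerson, OverTheShoulder };

    class Director {
    public:
        // Decide what to film, favouring the avatar and dense character interaction.
        GameAction selectAction(const std::vector<GameAction>& candidates) const {
            auto score = [](const GameAction& a) {
                return a.interactionLevel + (a.involvesAvatar ? 10 : 0);
            };
            GameAction best{ false, -1000 };
            for (const GameAction& a : candidates)
                if (score(a) > score(best)) best = a;
            return best;
        }
    };

    class Cinematographer {
    public:
        // Choose a filming method for the selected action; the Director can
        // bias the choice (here, only via the new-setting flag).
        ShotType chooseShot(const GameAction& action, bool newSetting) const {
            if (newSetting)                  return ShotType::Establishing;
            if (action.interactionLevel > 3) return ShotType::CloseUp;
            if (action.involvesAvatar)       return ShotType::OverTheShoulder;
            return ShotType::Character;
        }

        // Cut between CameraBot views only when the chosen shot type changes,
        // so consecutive frames remain consistent for the viewer.
        bool shouldCut(ShotType next) {
            bool cut = (next != current_);
            current_ = next;
            return cut;
        }

    private:
        ShotType current_ = ShotType::Establishing;
    };

In a real game loop the Director would be polled every frame or every few frames, and the Cinematographer would additionally spawn and retire CameraBots and suppress any cut that would hide information the player needs to complete the current task.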
Description
Contact: James Kneafsey, Institute of Technology Blanchardstown, james.kneafsey@itb.ie
Copyright statement
Copyright is held by the author(s).
Language
English
