English synopsis by Barbara Flueckiger

Visual Effects. Filmbilder aus dem Computer

An English translation of the chapter "Digital Characters" can be downloaded free of charge.


In the wake of William J. Mitchell's influential publication The Reconfigured Eye (1992), film and media scholars consider the post-photographic era mainly from a skeptical perspective. A confused and speculative discussion has emerged which centers on the ethical question of visual truth. While this notion may be useful in the context of the documentary, the fiction film has developed its own rules of representation. This study therefore aims to investigate the neglected aspects of computer-generated images (CGI) in the context of the fiction film. Based on a detailed look at the technical procedures of digital image production - modeling, texturing, animation, lighting and rendering - it discusses the aesthetic and narrative consequences of the new mode of production through an analysis of several hundred films which integrate CGI into photochemically produced material by means of digital compositing. Since its inception, film production has developed a broad range of so-called special effects which can be regarded as precursors of digital visual effects. The study therefore also takes the film-historical perspective into account by analyzing to what degree digital techniques have led to a change of aesthetic and narrative standards, or whether they submit to the culturally determined codes defined in the course of more than 100 years of film production.

With Frank Beau I call my approach "technobole", i.e. an investigation into the theoretical and epistemological foundations of the technology. My goal is to understand and to explain the heterogeneous theoretical assumptions which underpin technical strategies for constructing digital visual effects. In a broader context, my approach can be linked to the historical poetics of film developed by David Bordwell and the Wisconsin project. Beyond the close analysis of the films, I conducted interviews with film practitioners and computer graphics experts, and studied a great number of technical papers as well as production notes published in the journals Cinefex and American Cinematographer. Thus the book brings together a very broad range of information. It is roughly divided into three parts: technology, representation, and narration.

The first part explores the various techniques as they are employed in the production process.

Digital Images: Properties deals with fundamental aspects of the various types of digital image production. While they differ greatly with regard to their aesthetics and the processes of their production, they certainly share some common traits: namely, the quantization and the binary coding of the data. By way of this coding they form part of a universal digital ecosystem which allows for transmission - the interchangeability of data in diverse media - and transformation.
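The shared trait named above - quantization plus binary coding - can be made concrete with a minimal sketch (my own illustration, not an example from the book): a continuous light intensity is mapped onto one of a fixed number of discrete levels and stored as a binary code.

```python
def quantize(intensity: float, bits: int = 8) -> int:
    """Map a continuous intensity in [0.0, 1.0] onto 2**bits discrete levels."""
    levels = 2 ** bits - 1
    clamped = max(0.0, min(1.0, intensity))
    return round(clamped * levels)

# A mid-grey value becomes one of 256 discrete levels,
# stored as an 8-bit binary code.
code = format(quantize(0.5), "08b")
```

It is this uniform coding, regardless of how the image was produced, that makes the data interchangeable across media.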

Modeling describes various techniques for constructing 3D objects and scenes - polygon, procedural and image-based modeling as well as 3D scanning - and examines their use and properties in certain films, while at the same time delivering insights into the fundamental differences between these processes.
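The polygon approach can be sketched in a few lines (a hypothetical illustration of the general data structure, not code from any film's production): a mesh stores shared vertices plus faces that index into them, here for a unit cube.

```python
from itertools import product

# The eight corners of a unit cube, indexed 0..7.
vertices = list(product((0.0, 1.0), repeat=3))

# Each face is a quad given as four vertex indices.
faces = [
    (0, 1, 3, 2), (4, 5, 7, 6),  # the two x-faces
    (0, 1, 5, 4), (2, 3, 7, 6),  # the two y-faces
    (0, 2, 6, 4), (1, 3, 7, 5),  # the two z-faces
]

def edge_set(faces):
    """Collect the undirected edges implied by the face loops."""
    edges = set()
    for face in faces:
        for a, b in zip(face, face[1:] + face[:1]):
            edges.add(frozenset((a, b)))
    return edges

# Euler's formula V - E + F = 2 holds for a closed polygon mesh.
euler = len(vertices) - len(edge_set(faces)) + len(faces)
```

Procedural and image-based modeling, by contrast, generate or recover such geometry algorithmically rather than having it placed by hand.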

However, even more important than the mere shape of objects are their surfaces and materials, as is shown in the corresponding chapter. In the history of CGI, the evolution from plastic and metal to complex and/or organic materials such as fur, water or fabrics is strikingly apparent; the difficulty of simulating such materials limited the aesthetic range for decades. Less obvious are the basic principles of material properties in CGI, which are explored in this chapter as well.

As computer animation relies heavily on foundations developed in the area of classical animation - cel animation and stop motion - the chapter Animation starts with a historical overview before investigating different approaches such as keyframe animation, procedural animation and motion capture, and their respective scopes and uses.
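The core idea of keyframe animation can be reduced to a short sketch (my own illustration; production systems typically interpolate with splines and easing curves rather than linearly): the animator sets values at key times, and the software computes the in-betweens.

```python
def keyframe_value(keys, t):
    """Linearly interpolate an animation channel between (time, value) keys."""
    keys = sorted(keys)
    if t <= keys[0][0]:
        return keys[0][1]
    if t >= keys[-1][0]:
        return keys[-1][1]
    for (t0, v0), (t1, v1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            u = (t - t0) / (t1 - t0)  # normalized position between the keys
            return v0 + u * (v1 - v0)

# Two keys on a rotation channel; frame 12 is an interpolated in-between.
angle = keyframe_value([(0, 0.0), (24, 90.0)], 12)  # -> 45.0
```

This division of labor - artist-defined poses, machine-computed in-betweens - directly mirrors the key animator/in-betweener hierarchy of classical cel animation.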

While most lighting is informed by schemes stemming from actual lighting in live-action production, image-based lighting directly imports data from high dynamic range photographs into computer-generated scenes. With the investigation of the rendering process a range of epistemological questions arises; these are discussed in connection with the ideas of the German media scholar Friedrich Kittler, who proposed simply converting Richard Feynman's Lectures on Physics into software in order to obtain perfect results. As can be shown, however, the situation is much more complex: all rendering algorithms - raytracing, radiosity and photon mapping, to name a few - are based on approximations in accordance with the theorems of ray optics rather than more recent quantum-physical theories.
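The ray-optics approximation can be illustrated with the elementary operation of every raytracer, the ray-primitive intersection (a standard textbook formulation, not code from any production renderer): light is modeled as straight rays, with no wave or quantum effects.

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Nearest positive ray parameter t where origin + t*direction hits the
    sphere, or None on a miss. direction is assumed to be a unit vector."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius ** 2
    disc = b * b - 4.0 * c  # discriminant; a == 1 for a unit direction
    if disc < 0:
        return None  # the ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

# A ray cast along the z-axis hits a unit sphere centered at z = 5 at t = 4.
t = intersect_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)
```

Everything a renderer computes - shadows, reflections, global illumination - is built by repeating such geometric queries, which is precisely why the results are approximations rather than physics made perfect.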

Compositing has a longstanding tradition in the photochemical realm, where combination processes such as matte paintings, rear projection or traveling mattes have been used since the early stages of film history. Beyond a discussion of theoretical reflections on the subject as provided by media scholars such as Lev Manovich and William J. Mitchell, this chapter investigates three main topics, namely the combination of different layers, the interaction between the various image elements, and their aesthetic coherence.
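The combination of layers rests on the standard "over" operator of digital compositing introduced by Porter and Duff in 1984; a minimal sketch (my own illustration) with premultiplied-alpha pixels:

```python
def over(fg, bg):
    """Composite a premultiplied-alpha foreground pixel (r, g, b, a)
    on top of a background pixel: out = fg + (1 - fg.a) * bg."""
    a = fg[3]
    return tuple(f + (1.0 - a) * b for f, b in zip(fg, bg))

# A half-transparent red layer over an opaque blue background.
pixel = over((0.5, 0.0, 0.0, 0.5), (0.0, 0.0, 1.0, 1.0))
```

The alpha channel plays the role that the traveling matte played in the photochemical era: it decides, per pixel, how much of the background layer remains visible.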

The middle part - Theory of Representation - embeds the technological foundations provided in the first part in the broader philosophical context of representation in the arts as well as in cinematic fiction. Most importantly, it classifies the various technological approaches into four categories - modeling processes, recording, painting, and measurement - with an emphasis on the first two. Modeling processes are based on mathematical descriptions; they are abstract by their very nature and open to free imagination. Recording, on the other hand, is a translation of physical data from the real world according to a specified protocol; it is therefore bound to an indexical relationship between the representation and its object. Finally, the last section of the chapter analyzes the relevance and meaning of a variety of analog artifacts - i.e. transformations of the photographic image such as grain, motion blur, diffusion or lens flares - which are emulated by digital means in CGI as an essential aspect of photorealism.

In the third and last part of the book some relevant narrative patterns are put under scrutiny.

Dimensions and Layers starts with a discussion of the relationship between technological innovation and knowledge production. One of the recurring distinctions in the role of visual effects is the dichotomy between visibility and invisibility. This distinction is of importance not only from a perceptual perspective but also as a narrative category, because CGI offer a vast range of visualization strategies. These visualizations convey dimensions beyond the scope of human perception, imagination, thought or magical phenomena. Magic is not only a content of representation but refers to the technology itself, providing it with a mythic enhancement. This strategy is especially evident in the making-of documentaries, which notoriously depict VFX artists as wizards transgressing natural restrictions and entering uncharted territory. Finally, it can be shown that many heterogeneous forms of representation have emerged which act as mise-en-abyme, thereby reflecting on parts of the primary narrative or importing higher-level semantics through intermedial references.

Finally, the chapter Digital Characters starts with a history of synthespians in film. It then investigates fundamental problems of digital character construction, namely character consistency, the modeling of complexity, and the interaction of digital characters with live-action protagonists. Character consistency is essential for the perception of the figures' identity. Practitioners make use of the popular theory of the uncanny valley, proposed by Masahiro Mori in 1970 in the context of robotics. Mori states that the more an artificial character appears to be human, the more emotion it evokes - but just before it appears fully human, an alienating effect occurs, which he calls the "uncanny valley". While this theory is applicable to a range of synthespians, there are examples which do not fit into this scheme. Therefore I have developed an alternative called the distance model. It assumes that the different aspects of appearance and behavior should be located at a similar distance from a fully transparent, seemingly natural mode of representation. A case study of Gollum from the LORD OF THE RINGS trilogy - certainly one of the most convincing digital characters to date - shows how a digital character succeeds in arousing emotions. The chapter concludes with an investigation of the superhero problem: since digital characters defy the laws of physics, they seem to have unlimited possibilities, which might lead to a breakdown of the audience's empathy.

The Final Remarks which complete the study deliver a summary of the historical development of CGI in film production. It becomes evident that the new technologies evolved outside the Hollywood system, in academic institutions and in small companies producing music videos and commercials. A close look at the transition reveals that a complex system of diverse forces was at work, which defies any monocausal explanation. The construction of seemingly plausible fictional worlds is in fact highly determined by culturally established patterns. Therefore, despite the notorious rhetoric which presents the transition as a revolution, it becomes evident that to this day mainstream film, as a mass-cultural phenomenon, prefers moderate change.

An extensive Appendix contains a vast bibliography, a glossary and a film index.

Unique Selling Points

The literature on digital visual effects can be divided into two main categories: publications either target a readership of practitioners and are therefore instructional manuals, or they emerge from the field of film and media studies and largely neglect the technological aspects. "Visual Effects" is the first book to connect deep insights into the technology with a theoretical reflection on epistemological, aesthetic and narrative facets, in conjunction with a bottom-up analysis of a vast group of films. Its aim is to translate technological knowledge for the humanities in order to investigate its consequences for form and content from an interdisciplinary perspective. Each chapter is self-contained, integrating all the necessary information, so that individual chapters can be read on their own. Cross-references and a glossary help to close possible gaps. Thus "Visual Effects" is a perfect textbook for academic courses.

The book aims at a large target readership consisting of:
Film and media scholars
Film practitioners
Interested laypersons

Barbara Flückiger (2008): Visual Effects. Filmbilder aus dem Computer.

Marburg: Schüren. 528 pp., in color
ISBN 978-3-89472-518-1 EUR 38.00