
(1) Semiotics: "...the study of patterned [...] communication behavior, including [...] signs and symbols."

Many computer applications deal with communication. For over twenty years, the use of text processors has been common practice in office environments. The field of computer graphics, one of the dominant fields in computer science, deals with the role of computers in (visual) communication. The emphasis of much of this work has been on techniques of image generation and efficient graphical interaction for a variety of application domains. It is interesting to note that, in the computer science community, relatively little attention has been paid to the underlying semantics of the communicated pictorial messages. This may be partly understood from the fact that the semantics of a message communicated by computers is not very different from that of a similar message communicated without computer assistance. Semiotics (1) has developed largely as a separate discipline from computer science.

A second reason for the long-lasting mutual ignorance between computer science (and computer graphics in particular) and semiotics may be that it requires sophisticated computer algorithms to represent the semantics of images. Until quite recently, the algorithms of image generation in themselves were sufficiently challenging to absorb the majority of the efforts in the field. Also, computer architecture and programming sophistication were not sufficiently developed to deal with the complicated task of representing the meaning of the rendered images.

In the next few years there will be an ongoing research effort in mainstream computer graphics, focussing, e.g., on hardware architectures for real-time rendering and increasingly advanced paradigms for geometric modeling and virtual reality. However, there is also a relatively new trend within computer graphics in which visual communication and interaction are explicitly tied in with attention to the meaning of the communicated images. This relatively young branch of computer science is called Computational Visualistics.


In order to understand the necessity of linking the visual aspects of an image with its meaning, which is one of the starting points of computational visualistics, it may be illustrative to study an example first. Let us look at the evolution of the Western alphabet. Although the alphabet developed separately from the notion of computers, it is instructive to become aware of some of the issues regarding the representation of meaning that underlie (dealing with) images -- even if these images are just the individual characters of the alphabet.

In the Western alphabet, every letter is a little drawing, a distribution of dark on a light background, the result of many centuries of gradual transformations and artistic font design originating in ancient pictograms. For instance, the "A" shape apparently originated in an Egyptian hieroglyph of an eagle (ahom) in cursive hieratic writing.

A naturalistic drawing of an eagle is capable of conveying many anatomical attributes of an individual bird. However, in some cases the drawing does not refer to an individual bird, but to the more abstract notion of "an eagle" in general, for instance in a pamphlet that has the purpose of instructing shepherds to watch out for eagles. Now the artist responsible for drawing this "generic eagle" faces a conceptual difficulty. Which instance of the class "eagle" should he choose? To give a full account of the potential danger of eagles for sheep, the artist really should refer to all possible eagles in all possible postures. But it is clearly impossible to render this multitude of drawings.

(2) note: not only all existing eagles, but even all possible eagles!

Interestingly, it is not only impossible to draw all individual postures of all possible eagles, it is also unnecessary. In virtue of a powerful but mysterious mental process in the observer's mind, a drawing of one particular eagle, in one particular posture, will automatically associate with the class of all possible eagles (2) in all possible postures. This process, which we will call implicit generalization, is a peculiar phenomenon. Implicit generalization is not something that has to be negotiated between the artist and the observer. It occurs automatically, and it is probably one of the reasons that human beings do rather well in an environment where successful recognition and classification of similar, but different, things (food, enemies, mates, etc.) is critical in order to survive.

(3) We could call such impressions "photorealistic", although in this context the invention of photography was still some 3900 years in the future and the usage of the word "photorealistic" in the sense now used by computer graphics practitioners yet another hundred years later ...

In the case of pictorial artifacts, however, implicit generalization takes on more baroque forms. One reason for this is that not all artists are capable of the perfect impressions (3) we assumed above. And even a skilled artist is sometimes in a hurry. So undoubtedly, less perfect drawings of eagles have been around, some with wrong proportions, some with few details, indeed, some highly schematic.

Implicit generalization in the case of the pragmatic meaning of an eagle as a potentially dangerous large bird of prey may carry over to other species, such as falcons or buzzards. But there is a limit to the scope of this generalization. The abstraction will probably never include larks or blackbirds, or leopards or wolves.

The implicit generalization of the pragmatic meaning of pictures, however, is much more "contagious". First, this may be due to the imperfection of the picture. If the artist leaves out enough detail, the observer may be able to make a successful mental match between the picture and all creatures with wings, including larks and blackbirds, but also bats and dragonflies. Or even with all roughly V-shaped objects, depending on the simplicity of the picture.

But implicit generalization does not even stop here. After some time, the schematic picture of the eagle may also refer to the word "eagle" instead of the object "eagle", and somewhat later to the sound of the word "eagle", and even later to the sound of the first letter of this word. And we have to realize that all these abstractions (or rather changes in the communication code) probably did not result from meticulously negotiated agreements between the artist and the observer. On the contrary, they most likely occurred largely unnoticed, and it would be interesting to know how much confusion was caused during the process.

Apart from implicit generalizations, and other highly confusing meaning-transformers, there is a second process going on that complicates the relation between pictures and the real world. The pictures themselves also seem to evolve according to their own, hidden laws, of which "changing aesthetics" and "artistic innovation" are merely the least mysterious ones. As a result, the implicitly generalized eagle has not only evolved into an "A", but also into many hundreds of other distributions of dark on a light background.

(4) The amount of computing power available per dollar's worth of hardware doubles every eighteen months.

Over the last 4000 years, the process of creating and transforming pictures and endowing them with evolving meaning, either concrete or abstract or a mixture of the two, has been a process of gradual evolution interspersed, every now and then, with a quantum leap (the introduction of phonetic alphabets, the invention of perspective, the usage of schematic drawings, the printing process, photography, impressionism, expressionism, abstract painting, ...). Undoubtedly, there was some confusion after each jump, but the time lapse before the next one mostly allowed sufficient habituation to the new communication codes. Until the last couple of decades of the second millennium AD, when computers came along and everything started evolving in accordance with Moore's law (4).

In particular, the developments in computer graphics have caused a tremendous increase in the number of available options for visualizing both concrete and abstract information. But the focus of this work has been predominantly on the picture-making process (to be more precise: on the process of algorithmically imitating the physics of photography), and the ties between the pictures and their underlying semantics, as well as alternative strategies for picture generation, have been largely ignored. One lesson to be learned from our small thought experiments on the history of semiotics, outlined above, is that confusion is to be expected if the relation between pictures and their meaning is left unspecified, and new communication codes are introduced before the previous ones have been established.

Visual Representation and Meaning

As we mentioned in the introduction, the new research field of computational visualistics attempts to fill in some of the blank areas that have been left open by mainstream computer graphics research. The central theme in most of the research in computational visualistics has to do with the relation between the visual representation of images and their meaning. This relation is really two-directional:
  1. The (visual) contents of an image should be such that this meaning is optimally communicated: the meaning dictates (part of) the pictorial contents;
  2. Given the contents of an image, additional (non-graphical) communication modes can be used to support conveyance of its meaning: the pictorial contents dictate non-graphical attributes.
In the first category, there is a rich tradition, dating back to the pre-computer era, where non-photorealistic imaging techniques have been used. In fact, our short excursion to the origin of the Western alphabet is an example. Other examples are simplified drawings, or drawings in which deliberate alterations with respect to the depicted object(s) have been introduced. These drawings are often used in the context of teaching or instruction. The alterations may range from subtle modifications (e.g., local adjustments to the line style, size, or amount of detail), to completely symbolic renderings (organization charts, graphs). Halfway between these extremes we find geographic maps, architectural drawings and sketches, and so on. In many cases, the designer of these types of images is not, in an algorithmic sense, aware of the types of alterations he or she applies. Therefore, in order to program computers to render these types of images, one first has to carefully analyze and formalize the process of non-photorealistic picture-making. It has to be fully understood what choices a human artist would make when choosing levels of detail, line styles, hatching, and so on, as a function of the structure and meaning of the object or scene to depict. Next, there is the more technical challenge of formalizing these choices, and translating them into suitable algorithms.
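To hint at what "translating these choices into suitable algorithms" might look like, consider the following minimal sketch in Python. It is purely illustrative: the function name, the importance scores, and the style parameters are assumptions made for the sake of example, not a method described in this book.

```python
# Illustrative sketch: derive non-photorealistic rendering parameters
# from a semantic importance score attached to each scene object.
# All names and thresholds are hypothetical.

def render_style(importance: float) -> dict:
    """Map a semantic importance score in [0, 1] to drawing parameters."""
    if importance > 0.7:
        return {"line_width": 2.0, "detail": "high", "hatching": True}
    elif importance > 0.3:
        return {"line_width": 1.0, "detail": "medium", "hatching": False}
    else:
        return {"line_width": 0.5, "detail": "low", "hatching": False}

# A semantically annotated scene: each object carries an importance score.
scene = [("heart", 0.9), ("ribcage", 0.5), ("background tissue", 0.1)]
for name, importance in scene:
    style = render_style(importance)
    print(f"{name}: {style['detail']} detail, line width {style['line_width']}")
```

The point of the sketch is only that the rendering parameters are a function of meaning, not of geometry alone; a real system would of course require a far richer semantic representation.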

With respect to the second category, we observe that in some cases even a photorealistic picture on its own is not sufficient to convey the intended meaning. An example would be where the picture is meant to be understood by a visually handicapped person; another example would be a situation in which non-graphical references to the objects in the picture are required (e.g., from an associated text). In both cases, a non-pictorial annotation is required, either in the form of a full non-graphical representation (e.g., a 1-D graph that is translated to a varying sound over time, or a tactile map), or in the form of well-placed textual labels or symbolic icons. In many applications we are confronted with the challenge of automating the process of non-graphical annotation, based on a formal representation of the picture's contents.
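The automation of textual labeling can likewise be hinted at with a toy sketch. The greedy placement strategy below, the candidate offsets, and all names are illustrative assumptions, not a technique taken from this book.

```python
# Illustrative sketch: place textual labels near object anchor points,
# greedily trying candidate offsets and rejecting overlapping positions.
# All names, sizes, and offsets are hypothetical.

def overlaps(a, b):
    """Axis-aligned overlap test for boxes (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def place_labels(anchors, w=10, h=3):
    """anchors: list of (name, x, y); returns name -> label box (x, y, w, h)."""
    placed = {}
    # Candidate positions: right-above, right-below, left-above, left-below.
    offsets = [(2, 1), (2, -4), (-12, 1), (-12, -4)]
    for name, x, y in anchors:
        for dx, dy in offsets:
            box = (x + dx, y + dy, w, h)
            if not any(overlaps(box, other) for other in placed.values()):
                placed[name] = box
                break
    return placed

labels = place_labels([("valve", 0, 0), ("aorta", 5, 0)])
```

A production system would need to couple such placement to a formal representation of the picture's contents, which is precisely the challenge identified above.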

The two research fields above can be seen to enhance, extend, and build on traditional techniques in image making. Although this is nowadays much too laborious for large-volume production, a human artist could in principle generate many of the visual effects discussed above. There are further research fields, however, where computer graphics has introduced drastically new paradigms in non-photorealistic image production. These include techniques such as selective zooming (enlarging those portions of an image of a 3-D model that are of current interest); manipulating the perspective of such images; (non-photorealistic) animation; synthetic holograms; and others. These are all techniques that have only become available over the last few years.


This book discusses in depth many of the research fields outlined above. It is an anthology of some of the results that have been obtained in Computational Visualistics at the University of Magdeburg over the last two years. Apart from the direct relevance of the methods and techniques described in the sequel, much of the merit of this work lies in the dawning awareness that effective visual communication critically relies on both the pictorial and the semantic aspects of the communicated picture.

It is my sincere wish that this awareness will continue to inspire fruitful and groundbreaking research.

Eindhoven, The Netherlands, August 1998

Kees van Overveld
