How do we explain a picture to another person? We talk about it: we describe the colors, shapes, and objects it contains and mention how those objects are related to each other.
How do we explain a verbal statement? We show a picture that visualizes the content of the utterance, the objects mentioned in it, and how they are related. In everyday communication, people use various means in parallel to convey their intentions.
They point at something, make a particular facial expression, gesticulate, or refer to the shared environment of the communication partners. In short, they use different modalities to communicate. It seems only natural to employ the same kind of interaction in human-computer interfaces. The consequence is a paradigm shift from passive interfaces, driven by mouse clicks or typed text, to an active communication partner that interprets its auditory and visual environment, draws inferences using background knowledge, and requests missing information. In the following, such an active human-computer interface will be called an artificial communicator.
However, the automatic interpretation of the signals of a single input modality, such as speech understanding, gesture recognition, or visual object recognition, is only one part of the whole. In order to build systems that communicate with people in a natural way, the integration of modalities is an essential and non-trivial task. Each modality has its own vocabulary and expressiveness: pointing defines a region or direction of interest,
a facial expression may convey an emotional state, speech understanding provides qualitative facts about the world, and vision perceives and interprets analog shapes in the world. It is hardly questionable that different formalisms are needed for processing different modalities, and indeed this is the case in the current state of the art (see Secs. 2.2 and 2.3). The open question, to which this thesis contributes an experimental study, is: what is the most promising formalism for integrating the results of the specialized processing components of such a multi-modal system, or artificial communicator? How should the individual components of the system be connected, and how should the processing be
organized? This thesis gives an innovative answer to these questions and presents a realization in a particular domain.