Steedman, Mark

  • Publication
    Automatically Generating Conversational Behaviors in Animated Agents
    (1994-10-04) Cassell, Justine; Badler, Norman I; Pelachaud, Catherine; Steedman, Mark
    In the creation of synthetic computer characters, the creator should not have to script or control every movement by hand; the characters should behave like human agents: for example, reporting the progress of a search or planning system, responding to knowledge-based queries, or portraying autonomous agents during real-time virtual environment simulations. For these automated characters we must instead generate behavior on the basis of rules abstracted from the study of human behavior.
  • Publication
    Synthesizing Cooperative Conversation
    (1996-07-05) Pelachaud, Catherine; Badler, Norman I; Cassell, Justine; Steedman, Mark; Prevost, Scott; Stone, Matthew
    We describe an implemented system which automatically generates and animates conversations between multiple human-like agents with appropriate and synchronized speech, intonation, facial expressions, and hand gestures. Conversations are created by a dialogue planner that produces the text as well as the intonation of the utterances. The speaker/listener relationship, the text, and the intonation in turn drive facial expression, lip motion, eye gaze, head motion, and arm gesture generators. (A sketch of this pipeline appears after this list.)
  • Publication
    Generating Facial Expressions for Speech
    (1996) Badler, Norman I; Pelachaud, Catherine; Steedman, Mark
    This paper reports results from a program that produces high quality animation of facial expressions and head movements, as automatically as possible, in conjunction with meaning-based speech synthesis, including spoken intonation. The goal of the research is as much to test and define our theories of the formal semantics for such gestures as to produce convincing animation. Towards this end we have produced a high level programming language for 3D animation of facial expressions. We have been concerned primarily with expressions conveying information correlated with the intonation of the voice: this includes the differences of timing, pitch, and emphasis that are related to such semantic distinctions of discourse as “focus”, “topic” and “comment”, “theme” and “rheme”, or “given” and “new” information. We are also interested in the relation of affect or emotion to facial expression. Until now, systems have not embodied such rule-governed translation from spoken utterance meaning to facial expressions. Our system embodies rules that describe and coordinate these relations: intonation/information, intonation/affect and facial expressions/affect. A meaning representation includes discourse information: what is contrastive/background information in the given context, and what is the “topic” or “theme” of the discourse. The system maps the meaning representation into how accents and their placement are chosen, how they are conveyed through facial expression, and how speech and facial expressions are coordinated. This determines a sequence of functional groups: lip shapes, conversational signals, punctuators, regulators or manipulators. Our algorithms then impose synchrony, create coarticulation effects, and determine affectual signals, eye and head movements. The lowest level representation is the Facial Action Coding System (FACS), which makes the generation system portable to other facial models. (A sketch of this rule chain appears after this list.)
  • Publication
    Linguistic Issues in Facial Animation
    (1991-06-01) Steedman, Mark; Pelachaud, Catherine; Badler, Norman I
    Our goal is to build a system of 3D animation of facial expressions of emotion correlated with the intonation of the voice. Until now, existing systems have not taken the link between these two features into account. We will look at the rules that control these relations (intonation/emotions and facial expressions/emotions) as well as the coordination of these various modes of expression. Given an utterance, we consider how the messages (what is new/old information in the given context) transmitted through the choice of accents and their placement are conveyed through the face. The facial model integrates the action of each muscle or group of muscles as well as the propagation of the muscles' movement (a sketch of such propagation appears after this list). Our first step will be to enumerate and differentiate facial movements linked to emotions from those linked to conversation. Then we will examine the rules that drive them and how their different functions interact.
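
The sketches below are illustrative only. First, the pipeline described in Synthesizing Cooperative Conversation: a dialogue planner emits text plus intonation, and those outputs, together with the speaker/listener relationship, drive separate generators for each animation channel. This is a minimal Python sketch under that reading of the abstract; every name here is a hypothetical stand-in, not the authors' code.

```python
from dataclasses import dataclass

@dataclass
class Utterance:
    speaker: str
    listener: str
    words: list    # the planned text, one entry per word
    accents: list  # a pitch-accent label (or None) per word

def plan_dialogue():
    """Stand-in for the dialogue planner: produces text plus intonation."""
    return [Utterance("Gilbert", "George",
                      ["do", "you", "have", "a", "blank", "check"],
                      [None, None, None, None, "H*", "H*"])]

def facial_expressions(u):
    # Hypothetical rule: raise the brows on pitch-accented words.
    return [f"brow_raise@{w}" for w, a in zip(u.words, u.accents) if a]

def lip_motions(u):
    # Hypothetical rule: one lip-shape event per word (phonemes omitted).
    return [f"lipsync@{w}" for w in u.words]

def gaze(u):
    # Hypothetical rule: the speaker looks toward the listener.
    return [f"{u.speaker} gazes at {u.listener}"]

def animate(utterances):
    # Each channel generator consumes the same planned utterance.
    for u in utterances:
        for channel in (facial_expressions, lip_motions, gaze):
            print(channel(u))

animate(plan_dialogue())
```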
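Second, the rule chain from Generating Facial Expressions for Speech: discourse meaning (theme/rheme, given/new) selects pitch accents; accents and boundary tones trigger functional groups (conversational signals, punctuators); and everything bottoms out in FACS action units. The rule contents below are assumptions for illustration, not the paper's actual rule set, though the FACS codes themselves are standard (AU1 inner brow raiser, AU2 outer brow raiser, AU45 blink).

```python
# Step 1 (illustrative): discourse meaning -> pitch accent.
def choose_accent(info_status, role):
    if info_status == "new" and role == "rheme":
        return "H*"    # accent new, rhematic material
    if info_status == "new" and role == "theme":
        return "L+H*"
    return None        # "given" information goes unaccented

# Step 2 (illustrative): accents map to conversational signals and
# boundary tones to punctuators; both are expressed as FACS action units.
SIGNALS = {"H*": ["AU1", "AU2"],  # brow raise accompanying an accent
           "L+H*": ["AU1"]}
PUNCTUATORS = {"H%": ["AU45"]}    # e.g. a blink at a boundary tone

def facs_for_word(info_status, role, boundary=None):
    aus = []
    accent = choose_accent(info_status, role)
    if accent:
        aus += SIGNALS[accent]
    if boundary:
        aus += PUNCTUATORS.get(boundary, [])
    return aus

print(facs_for_word("new", "rheme", boundary="H%"))  # ['AU1', 'AU2', 'AU45']
```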
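Third, the muscle model mentioned in Linguistic Issues in Facial Animation, which integrates each muscle's action and propagates its movement across the face. A generic way to realize propagation is distance-based falloff around the muscle's attachment point; the linear falloff below is an assumption for illustration, not the paper's actual deformation function.

```python
import math

def muscle_displacement(point, attachment, pull, radius):
    """Pull a skin `point` toward the muscle `attachment`, fading
    linearly to zero at `radius` (2D for brevity)."""
    d = math.dist(point, attachment)
    if d == 0.0 or d >= radius:
        return (0.0, 0.0)
    falloff = 1.0 - d / radius  # propagation weight for this point
    return (pull * falloff * (attachment[0] - point[0]) / d,
            pull * falloff * (attachment[1] - point[1]) / d)

# A skin point two units below a brow attachment, inside its radius:
print(muscle_displacement((1.0, 0.0), (1.0, 2.0), pull=0.5, radius=4.0))
# -> (0.0, 0.25): the point moves a quarter unit toward the attachment.
```

Summing such displacements over several muscles at each skin point would correspond to the abstract's integration of "each muscle or group of muscles."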