Search results
Publication: Representation of Actions as an Interlingua (2000-04-01)
Kipper, Karin; Palmer, Martha

We present a Parameterized Action Representation (PAR) that provides a conceptual representation of different types of actions used to animate virtual human agents in a simulated 3D environment. These actions involve changes of state, changes of location (kinematic), and exertion of force (dynamic). PARs are hierarchical, parameterized structures that facilitate both visual and verbal expressions. In order to support the animation of the actions, PARs have to make explicit many details that are often underspecified in the language. This detailed level of representation also provides a suitable pivot representation for generation in other natural languages, i.e., a form of interlingua. We show examples of how certain divergences in machine translation can be resolved by our approach, focusing specifically on how verb-framed and satellite-framed languages can use our representation.

Publication: Consistent Communication with Control (2001-01-01)
Allbeck, Jan; Badler, Norman I

We are seeking to outline a framework for creating embodied agents with consistency both in terms of human actions and communications in general and of individual humans in particular. Our goal is to drive this consistent behavior from internal or cognitive models of the agents. First, we describe channels of non-verbal communication and related research in embodied agents.
We then describe cognitive processes that can be used to coordinate these channels of communication and create consistent behavior.

Publication: Toward A Human Behavior Models Anthology for Synthetic Agent Development (2001-05-01)
Silverman, Barry G; Might, Robert; Dubois, Richard; Shin, Hogeun; Johns, Michael; Weaver, Ransom

This paper describes an effort to foster the availability of Human Behavior Models / Performance Moderator Functions (HBM/PMFs) that the modeling and simulation community can use to increase the realism of their human behavior models. HBM/PMFs quantify the impact of internal and external stressors on human performance, and help to capture the role of personality and individual differences. To facilitate that process, we are creating a web-based anthology of HBM/PMFs that abstracts many hundreds of them from diverse literatures, maps them into a taxonomy and a common mathematical framework suitable for implementation, and assesses their validity and reuse issues. This paper reports on progress to date, anthology construction issues, and lessons learned.

Publication: Automating Gait Generation (2001-08-12)
Sun, Harold; Metaxas, Dimitris

One of the most routine actions humans perform is walking. To date, however, no automated tool for generating human gait is available. This paper addresses the gait generation problem through three modular components. We present ElevWalker, a new low-level gait generator based on sagittal elevation angles, which allows curved locomotion - walking along a curved path - to be created easily; ElevInterp, which uses a new inverse motion interpolation algorithm to handle locomotion over uneven terrain; and MetaGait, a high-level control module which allows an animator to control a figure’s walking simply by specifying a path.
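The path-driven control idea behind MetaGait - the animator supplies only a path, and the system derives the walking motion - can be pictured as sampling a waypoint path at stride-length intervals. A minimal sketch, in which the function name, the stride constant, and the sampling scheme are illustrative assumptions rather than the paper's actual algorithm:

```python
import math

# Illustrative sketch of path-driven gait control: given only a 2D
# waypoint path, derive footstep positions and facing directions by
# walking along the path at a fixed stride. All constants are assumed.
def steps_along_path(path, stride=0.5):
    """path: list of (x, y) waypoints; returns (x, y, heading) per footstep."""
    steps = []
    carry = 0.0  # distance already covered toward the next footstep
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        heading = math.atan2(y1 - y0, x1 - x0)
        d = stride - carry
        while d <= seg:
            t = d / seg  # interpolate along the current segment
            steps.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0), heading))
            d += stride
        carry = seg - (d - stride)  # leftover distance into next segment
    return steps

path = [(0, 0), (3, 0), (3, 2)]  # an L-shaped walk with a turn
footsteps = steps_along_path(path)
print(len(footsteps), footsteps[-1])
```

A lower-level generator (ElevWalker in the paper) would then turn each sampled step and heading into actual joint motion; this sketch covers only the high-level path-to-steps layer.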
The synthesis of these components is an easy-to-use, real-time, fully automated animation tool suitable for off-line animation, virtual environments, and simulation.

Publication: Animation 2000++ (2000-01-01)
Badler, Norman I

In the next millennium, computer animation will be both the same as now and also very different. Animators will always have tools that allow specifying and controlling - through manual interactive interfaces - every nuance of shape, movement, and parameter settings. But whether for skilled animators or novices, the future of animation will present a fantastically expanded palette of possibilities: techniques, resources, and libraries for creating and controlling movements.

Publication: Generating Sequence of Eye Fixations Using Decision-theoretic Attention Model (2005-06-01)
Gu, Erdan; Wang, Jingbin; Badler, Norman I

Human eyes scan images with serial eye fixations. We proposed a novel attention selectivity model for the automatic generation of eye fixations on 2D static scenes. An activation map was first computed by extracting primary visual features and detecting meaningful objects in the scene. An adaptable retinal filter was applied to this map to generate "Regions of Interest" (ROIs), whose locations corresponded to those of activation peaks and whose sizes were estimated by an iterative adjustment algorithm. The focus of attention was moved serially over the detected ROIs by a decision-theoretic mechanism. The generated sequence of eye fixations was determined by a perceptual benefit function based on perceptual costs and rewards, while the time distribution over the different ROIs was estimated by a memory learning and decaying model.
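A decision-theoretic fixation mechanism of the kind this abstract describes can be sketched as a greedy loop: at each step, attend the ROI whose reward (activation) minus cost (saccade travel) is highest, then decay that ROI's reward to model memory and inhibition of return. The benefit formula, constants, and names below are illustrative assumptions, not the paper's actual model:

```python
import math

# Hedged sketch of a decision-theoretic fixation sequence. Rewards stand
# in for activation-map peaks; the travel cost stands in for the
# perceptual cost of a saccade. All coefficients are assumptions.
def fixation_sequence(rois, steps, decay=0.3, cost_per_unit=0.005):
    """rois: dict name -> {'pos': (x, y), 'reward': activation strength}."""
    rewards = {name: roi['reward'] for name, roi in rois.items()}
    gaze = (0.0, 0.0)  # initial gaze position
    seq = []
    for _ in range(steps):
        def benefit(name):
            x, y = rois[name]['pos']
            travel = math.hypot(x - gaze[0], y - gaze[1])  # saccade length
            return rewards[name] - cost_per_unit * travel
        target = max(rewards, key=benefit)  # highest perceptual benefit
        seq.append(target)
        gaze = rois[target]['pos']
        rewards[target] *= decay  # attended ROI becomes less attractive
    return seq

rois = {'face': {'pos': (10, 10), 'reward': 1.0},
        'sign': {'pos': (40, 5), 'reward': 0.8},
        'door': {'pos': (5, 40), 'reward': 0.3}}
print(fixation_sequence(rois, steps=4))
```

The decay factor is what makes attention shift rather than lock onto the single strongest ROI; the paper's memory learning and decaying model plays that role, and additionally estimates how long each ROI holds attention.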
Finally, to demonstrate the effectiveness of the proposed attention model, the simulated eye fixation shifts were compared with the gaze-tracking results of different human subjects.

Publication: Design of a Virtual Human Presenter (2000-08-01)
Noma, Tsukasa; Badler, Norman I; Zhao, Liwei

We created a virtual human presenter based on extensions to the Jack™ animated agent system. Inputs to the presenter system are speech texts with embedded commands, most of which relate to the virtual presenter's body language. The system then makes him act as a presenter with presentation skills, in real-time 3D animation synchronized with speech output. He can make presentations with virtual visual aids, within virtual 3D environments, or even on the WWW.

Publication: Eyes Alive (2002-01-01)
Badler, Norman I; Badler, Jeremy B; Lee, Sooha Park

For an animated human face model to appear natural, it should produce eye movements consistent with human ocular behavior. During face-to-face conversational interactions, eyes exhibit conversational turn-taking and reveal agent thought processes through gaze direction, saccades, and scan patterns. We have implemented an eye movement model based on empirical models of saccades and statistical models of eye-tracking data. Face animations using stationary eyes, eyes with random saccades only, and eyes with statistically derived saccades are compared to evaluate whether they appear natural and effective while communicating.

Publication: FacEMOTE: Qualitative Parametric Modifiers for Facial Animations (2002-07-02)
Badler, Norman I; Byun, Meeran

We propose a control mechanism for facial expressions that applies a few carefully chosen parametric modifications to preexisting expression data streams. This approach applies to any facial animation resource expressed in the general MPEG-4 form, whether taken from a library of preset facial expressions, captured from live performance, or created entirely by hand.
The MPEG-4 Facial Animation Parameters (FAPs) represent a facial expression as a set of parameterized muscle actions, given as intensities of individual muscle movements over time. Our system varies expressions by changing the intensities and scope of sets of MPEG-4 FAPs. It creates variations in “expressiveness” across the face model rather than simply scaling, interpolating, or blending facial mesh node positions. The parameters are adapted from the Effort parameters of Laban Movement Analysis (LMA); we developed a mapping from their values onto sets of FAPs. The FacEMOTE parameters thus perturb a base expression to create a wide range of expressions. Such an approach could allow real-time face animations to change underlying speech or facial expression shapes dynamically according to current agent affect or user interaction needs.

Publication: To Gesture or Not to Gesture: What is the Question? (2000-06-19)
Badler, Norman I; Costa, Monica; Zhao, Liwei; Chi, Diane M

Computer-synthesized characters are expected to make appropriate face, limb, and body gestures during communicative acts. We focus on non-facial movements and try to elucidate what is intended by the notions of "gesture" and "naturalness". We argue that looking only at the psychological notion of gesture and gesture type is insufficient to capture the movement qualities needed by an animated character. Movement observation science, specifically Laban Movement Analysis with its Effort and Shape components and motion phrasing, provides essential gesture components. We assert that the expression of movement qualities from the Effort dimensions is needed to make a gesture naturally crystallize out of abstract movements. Finally, we point out that non-facial gestures must involve the rest of the body to appear natural and convincing. A system called EMOTE has been implemented which applies parameterized Effort and Shape qualities to movements and thereby forms improved synthetic gestures.
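The FacEMOTE/EMOTE idea of perturbing a base animation with Effort-derived qualitative parameters can be pictured as rescaling and reshaping parameter intensity curves. A minimal sketch, assuming a made-up mapping from two Effort-like qualities to gain and peak sharpening - the papers' actual LMA-to-FAP mapping is more elaborate:

```python
import math

# Hedged sketch of FacEMOTE-style modulation: scale the per-frame
# intensity curves of MPEG-4 Facial Animation Parameters (FAPs) by
# factors derived from Effort-like qualities. The mapping below is an
# illustrative assumption, not the papers' actual LMA-to-FAP mapping.
def modulate_faps(fap_curves, effort):
    """fap_curves: dict fap_name -> list of per-frame intensities.
    effort: dict of assumed qualities in [-1, 1], e.g. {'weight': 0.5}."""
    gain = 1.0 + 0.5 * effort.get('weight', 0.0)    # strong vs. light
    sharpen = 1.0 + 0.5 * effort.get('time', 0.0)   # sudden vs. sustained
    out = {}
    for name, curve in fap_curves.items():
        peak = max((abs(v) for v in curve), default=0.0) or 1.0
        # Normalize, reshape the peak, rescale, and restore each sign.
        out[name] = [
            math.copysign(gain * peak * (abs(v) / peak) ** sharpen, v)
            for v in curve
        ]
    return out

# raise_l_i_eyebrow is a real MPEG-4 FAP name; the curve is made up.
base = {'raise_l_i_eyebrow': [0.0, 0.4, 1.0, 0.4, 0.0]}
strong = modulate_faps(base, {'weight': 1.0})
print(strong['raise_l_i_eyebrow'])  # peak intensity grows from 1.0 to 1.5
```

Because the base curve is only reweighted, not replaced, the same mechanism could run over captured or hand-authored expression streams, which is the appeal of the qualitative-modifier approach.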