Allbeck, Jan
Search Results: showing 1-10 of 11
Publication: Automated Analysis of Human Factors Requirements (2007-02-01)
Allbeck, Jan; Badler, Norman I.
Computational ergonomic analyses are often laboriously tested one task at a time. As digital human models improve, we can partially automate the entire analysis process of checking human factors requirements or regulations against a given design. We are extending our Parameterized Action Representation (PAR) to store requirements, and its execution system to drive human models through required tasks. Databases of actions, objects, regulations, and digital humans are instantiated into PARs and executed by analyzers that simulate the actions on digital humans and monitor them to report successes and failures. These extensions will allow quantitative but localized design assessment relative to specific human factors requirements.

Publication: Towards Behavioral Consistency in Animated Agents (2000-11-30)
Badler, Norman I.; Allbeck, Jan M.
We seek to outline a framework for creating embodied agents whose behavior is consistent both with human actions and communications in general and with individual humans in particular. Our goal is to drive this consistent behavior from internal or cognitive models of the agents.

Publication: Dynamically Altering Agent Behaviors Using Natural Language Instructions (2000-06-03)
Allbeck, Jan M.; Bindiganavale, Ramamani; Badler, Norman I.; Schuler, William; Joshi, Aravind K.; Palmer, Martha
Smart avatars are virtual human representations controlled by real people. Given instructions interactively, smart avatars can act as autonomous or reactive agents. During a real-time simulation, a user should be able to dynamically refine his or her avatar's behavior in reaction to simulated stimuli without having to undertake a lengthy off-line programming session.
In this paper, we introduce an architecture that allows users to input immediate or persistent instructions using natural language and see the agents' resulting behavioral changes in the graphical output of the simulation.

Publication: Being a Part of the Crowd: Towards Validating VR Crowds Using Presence (2008-05-12)
Stocker, Catherine; Allbeck, Jan M.; Badler, Norman I.
Crowd simulation models currently lack a commonly accepted validation method. In this paper, we propose the level of presence achieved by a human in a virtual environment (VE) as a metric for virtual crowd behavior. Using experimental evidence from the presence literature and the results of a pilot experiment that we ran, we explore the egocentric features that a crowd simulation model should have in order to achieve high levels of presence and thus serve as a framework for validating simulated crowd behavior. We implemented four crowd models for our pilot experiment: social forces, rule-based, cellular automata, and HiDAC. Participants interacted with the crowd members of each model in an immersive virtual environment for the purpose of studying presence in virtual crowds, with the goal of establishing the basis for a future validation method.

Publication: Generating Plausible Individual Agent Movements From Spatio-Temporal Occupancy Data (2007-01-01)
Sunshine-Hill, Ben; Allbeck, Jan M.; Badler, Norman I.; Pelechano, Nuria
We introduce the Spatio-Temporal Agent Motion Model, a data-driven representation of the behavior and motion of individuals within a space over the course of a day.
We explore different representations for this model, incorporating different modes of individual behavior, and describe how crowd simulations can use this model as source material for dynamic and realistic behaviors.

Publication: Evaluating American Sign Language Generation Through the Participation of Native ASL Signers (2008-05-01)
Zhao, Liming; Gu, Erdan; Huenerfauth, Matt; Allbeck, Jan M.
We discuss important factors in the design of evaluation studies for systems that generate animations of American Sign Language (ASL) sentences. In particular, we outline how some cultural and linguistic characteristics of members of the American Deaf community must be taken into account to ensure the accuracy of evaluations involving these users. Finally, we describe our implementation and user-based evaluation (by native ASL signers) of a prototype ASL generator that produces sentences containing classifier predicates, frequent and complex spatial phenomena that previous ASL generators have not produced.

Publication: Controlling Individual Agents in High-Density Crowd Simulation (2007-08-03)
Allbeck, Jan M.; Pelechano, Nuria; Badler, Norman I.
Simulating the motion of realistic, large, dense crowds of autonomous agents is still a challenge for the computer graphics community. Typical approaches either resemble particle simulations (where agents lack orientation controls) or are conservative in the range of human motion possible (agents lack psychological state and are not allowed to 'push' each other). Our HiDAC system (High-Density Autonomous Crowds) focuses on the problem of simulating the local motion and global wayfinding behaviors of crowds moving in a natural manner within dynamically changing virtual environments.
By applying a combination of psychological and geometrical rules with a social and physical forces model, HiDAC exhibits a wide variety of emergent behaviors, from agent line formation to pushing behavior and its consequences, depending on the current situation, the personalities of the individuals, and the perceived social density.

Publication: Creating Crowd Variation with the Ocean Personality Model (2008-01-01)
Allbeck, Jan M.; Badler, Norman I.
Most current crowd simulators animate homogeneous crowds but include underlying parameters that can be tuned to create variation within the crowd. These parameters, however, are specific to the crowd models and may be difficult for an animator or naïve user to work with. We propose mapping these parameters to personality traits. In this paper, we extend the HiDAC (High-Density Autonomous Crowds) system by providing each agent with a personality model in order to examine how the emergent behavior of the crowd is affected. We use the OCEAN personality model as a basis for agent psychology. To each personality trait we associate nominal behaviors; thus, specifying a personality for an agent automates the low-level parameter tuning process. We describe a plausible mapping from personality traits to existing behavior types and analyze the overall emergent crowd behaviors.

Publication: Pedestrians: Creating Agent Behaviors through Statistical Analysis of Observation Data (2001-11-07)
Ashida, Koji; Allbeck, Jan; Lee, Seung-Joo; Badler, Norman I.; Sun, Harold; Metaxas, Dimitris
Creating a complex virtual environment with human inhabitants that behave as we would expect real humans to behave is a difficult and time-consuming task. Time must be spent to construct the environment, to create human figures, to create animations for the agents' actions, and to create controls for the agents' behaviors, such as scripts, plans, and decision-makers. Often, work done for one virtual environment must be completely replicated for another.
The creation of robust, procedural actions that can be ported from one simulation to another would ease the creation of new virtual environments. Since walking is useful in many different virtual environments, creating natural-looking walking is important. In this paper we present a system for producing more natural-looking walking by incorporating actions for the upper body. We aim to provide a tool that authors of virtual environments can use to add realism to their characters with minimal effort.

Publication: Real Time Virtual Humans (1997-04-01)
Badler, Norman I.; Allbeck, Jan M.; Bindiganavale, Ramamani; Bourne, Juliet C.; Palmer, Martha S.; Shi, Jianping
The last few years have seen great maturation in the computation speed and control methods needed to portray 3D virtual humans suitable for real interactive applications. Various dimensions of real-time virtual humans are considered, such as appearance and movement, autonomous action, and skills such as gesture, attention, and locomotion. A virtual human architecture includes low-level motor skills, a mid-level PaT-Net parallel finite-state machine controller, and a high-level conceptual action representation that can be used to drive virtual humans through complex tasks. This structure offers a deep connection between natural language instructions and animation control.
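The OCEAN work above describes associating nominal behaviors with each personality trait so that setting a personality automates low-level parameter tuning. A minimal sketch of that idea follows; the five trait names are the standard OCEAN factors, but the parameter names and weights here are invented for illustration and are not the actual HiDAC mapping.

```python
from dataclasses import dataclass

@dataclass
class Personality:
    """OCEAN traits, each normalized to [0, 1]."""
    openness: float
    conscientiousness: float
    extroversion: float
    agreeableness: float
    neuroticism: float

def to_crowd_params(p: Personality) -> dict:
    """Map traits to low-level steering parameters (hypothetical weights)."""
    return {
        # Extroverted agents walk faster and tolerate less personal space.
        "preferred_speed": 1.0 + 0.5 * p.extroversion,
        "personal_space": 0.6 - 0.3 * p.extroversion,
        # Less agreeable agents are more willing to push through a crowd.
        "pushing_tendency": 1.0 - p.agreeableness,
        # Conscientious agents respect queues rather than cutting in.
        "queue_respect": p.conscientiousness,
        # Neurotic agents react more strongly to perceived crowd density.
        "panic_gain": 0.5 + 0.5 * p.neuroticism,
    }

# Specifying one high-level personality tunes every low-level parameter at once.
calm_leader = Personality(0.7, 0.9, 0.8, 0.8, 0.1)
params = to_crowd_params(calm_leader)
```

The appeal of this indirection, as the abstract notes, is that an animator adjusts five intuitive trait values instead of many model-specific parameters, and heterogeneous crowds emerge from sampling different personalities per agent.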