Penn Engineering

The School of Engineering and Applied Science, established in 1852, is composed of six academic departments and numerous interdisciplinary centers, institutes, and laboratories. At Penn Engineering, we are preparing the next generation of innovative engineers, entrepreneurs and leaders. Our unique culture of cooperation and teamwork, emphasis on research, and dedicated faculty advisors who teach as well as mentor provide the ideal environment for the intellectual growth and development of well-rounded global citizens.

Search results

  • Publication
    Authoring Multi-Actor Behaviors in Crowds With Diverse Personalities
    (2013-01-01) Kapadia, Mubbasir; Shoulson, Alexander; Durupinar, Funda; Badler, Norman I
    Multi-actor simulation is critical to cinematic content creation, disaster and security simulation, and interactive entertainment. A key challenge is providing an appropriate interface for authoring high-fidelity virtual actors with feature-rich control mechanisms capable of complex interactions with the environment and other actors. In this chapter, we present work that addresses the problem of behavior authoring at three levels: individual and group interactions are conducted in an event-centric manner using parameterized behavior trees, social crowd dynamics are captured using the OCEAN personality model, and a centralized automated planner is used to enforce global narrative constraints on the scale of the entire simulation. We demonstrate the benefits and limitations of each of these approaches and propose the need for a single unifying construct capable of authoring functional, purposeful, autonomous actors that conform to a global narrative in an interactive simulation.
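The behavior-tree layer described above can be illustrated with a minimal sketch. This is not the chapter's actual framework; the node types, actor representation, and leaf actions below are hypothetical stand-ins showing how parameterized behavior trees compose individual interactions from reusable, parameterized leaves.

```python
class Node:
    """Base behavior tree node; tick() returns True on success."""
    def tick(self, actor):
        raise NotImplementedError

class Sequence(Node):
    """Succeeds only if every child succeeds, evaluated in order."""
    def __init__(self, *children):
        self.children = children
    def tick(self, actor):
        return all(child.tick(actor) for child in self.children)

class Selector(Node):
    """Succeeds as soon as any child succeeds."""
    def __init__(self, *children):
        self.children = children
    def tick(self, actor):
        return any(child.tick(actor) for child in self.children)

class Action(Node):
    """Leaf node bound to a function and fixed parameters (the 'parameterized' part)."""
    def __init__(self, fn, **params):
        self.fn, self.params = fn, params
    def tick(self, actor):
        return self.fn(actor, **self.params)

# Hypothetical leaf actions operating on a toy actor state.
def walk_to(actor, target):
    actor["pos"] = target
    return True

def greet(actor, phrase):
    actor["said"] = phrase
    return True

# An event-style tree: walk to a meeting point, then greet.
converse = Sequence(Action(walk_to, target=(3, 4)),
                    Action(greet, phrase="hello"))
actor = {"pos": (0, 0)}
converse.tick(actor)  # actor walks, then greets
```

The same tree can be instantiated with different parameters for each participating actor, which is the appeal of the event-centric approach.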
  • Publication
    Motion Planning for Redundant Branching Articulated Figures with Many Degrees of Freedom
    (1992-07-08) Ching, Wallace S.; Badler, Norman I
    A fast algorithm is presented that can handle the motion planning problem for articulated figures with branches and many degrees of freedom. The algorithm breaks down the degrees of freedom of the figure into C-space groups and computes the free motion for each of these groups in a sequential fashion. It traverses the tree in a depth-first order to compute the motion for all the branches. A special playback routine is then used to traverse the tree again in reverse order to play back the final motion. The planner runs in linear time with respect to the total number of C-space groups without backtracking. We believe that the planner would find a path in most cases and is fast enough for practical use in a wide range of applications.
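As a rough illustration of the sequential C-space group idea, the sketch below plans each group independently over a hypothetical 1-D discretized joint range, committing earlier groups before planning later ones. The fixed 0..9 range, BFS search, and blocked-cell sets are illustrative assumptions, not the paper's actual planner.

```python
from collections import deque

def plan_group(start, goal, blocked):
    """BFS over a 1-D discretized joint range (0..9), avoiding blocked values."""
    if start == goal:
        return [start]
    frontier, parent = deque([start]), {start: None}
    while frontier:
        v = frontier.popleft()
        for nxt in (v - 1, v + 1):
            if 0 <= nxt <= 9 and nxt not in blocked and nxt not in parent:
                parent[nxt] = v
                if nxt == goal:
                    # Reconstruct the path back to the start.
                    path = [nxt]
                    while parent[path[-1]] is not None:
                        path.append(parent[path[-1]])
                    return path[::-1]
                frontier.append(nxt)
    return None  # no free motion for this group

# Plan groups in sequence: each later group is planned with the
# earlier groups already committed to their computed motions.
groups = [(0, 5, {7}),   # (start, goal, blocked cells) per C-space group
          (9, 2, {0})]
motions = [plan_group(s, g, b) for s, g, b in groups]
# A playback pass would then replay each group's motion in turn.
```

Planning each low-dimensional group separately, rather than searching the full joint configuration space at once, is what keeps the running time linear in the number of groups.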
  • Publication
    Virtual Human Animation Based on Movement Observation and Cognitive Behavior Models
    (1999-05-01) Badler, Norman I; Chi, Diane M.; Chopra-Khullar, Sonu
    Automatically animating virtual humans with actions that reflect real human motions is still a challenge. We present a framework for animation that is based on utilizing empirical and validated data from movement observation and cognitive psychology. To illustrate these, we demonstrate a mapping from Effort motion factors onto expressive arm movements, and from cognitive data to autonomous attention behaviors. We conclude with a discussion on the implications of this approach for the future of real-time virtual human animation.
  • Publication
    Simulated Casualties and Medics for Emergency Training
    (1997) Chi, Diane M; Kokkevis, Evangelos; Ogunyemi, Omolola; Bindiganavale, Ramamani; Hollick, Michael J; Clarke, John R; Webber, Bonnie L; Badler, Norman I
    The MediSim system extends virtual environment technology to allow medical personnel to interact with and train on simulated casualties. The casualty model employs a three-dimensional animated human body that displays appropriate physical and behavioral responses to injury and/or treatment. Medical corpsmen behaviors were developed to allow the actions of simulated medical personnel to conform to both military practice and medical protocols during patient assessment and stabilization. A trainee may initiate medic actions through a mouse and menu interface; a VR interface has also been created by Stansfield's research group at Sandia National Labs.
  • Publication
    Enhanced Collision Perception Using Tactile Feedback
    (2003-01-01) Bloomfield, Aaron; Badler, Norman I
    We used a custom-designed tactor suit to provide full-body vibrotactile feedback across the human arm, enabling users to perceive a physical sense of collisions in a virtual world. We constructed a 3-D virtual environment to test arm-reach movements. We present the results of human-subject trials that test the benefit of using vibrotactile feedback for this purpose. Our preliminary results show a small but distinct advantage with the use of tactors. With additional refinements to the system, improved performance results can be obtained.
  • Publication
    Pedestrians: Creating Agent Behaviors through Statistical Analysis of Observation Data
    (2001-11-07) Ashida, Koji; Lee, Seung-Joo; Allbeck, Jan; Sun, Harold; Badler, Norman I; Metaxas, Dimitris
    Creating a complex virtual environment with human inhabitants that behave as we would expect real humans to behave is a difficult and time-consuming task. Time must be spent to construct the environment, to create human figures, to create animations for the agents' actions, and to create controls for the agents' behaviors, such as scripts, plans, and decision-makers. Often, work done for one virtual environment must be completely replicated for another. The creation of robust, procedural actions that can be ported from one simulation to another would ease the creation of new virtual environments. As walking is useful in many different virtual environments, the creation of natural-looking walking is important. In this paper we present a system for producing more natural-looking walking by incorporating actions for the upper body. We aim to provide a tool that authors of virtual environments can use to add realism to their characters with little effort.
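One way to read the statistical-analysis step is as sampling secondary upper-body actions from observed frequencies. The action names and probabilities below are hypothetical placeholders, not the paper's measured data.

```python
import random

# Hypothetical per-walking-cycle action frequencies, as might be tallied
# from annotated video of real pedestrians.
freq = {"look_around": 0.40, "touch_face": 0.15, "adjust_clothes": 0.10,
        "check_watch": 0.05, "none": 0.30}

def sample_upper_body_action(rng=random):
    """Draw one secondary upper-body action from the observed distribution."""
    r, total = rng.random(), 0.0
    for action, p in freq.items():
        total += p
        if r < total:
            return action
    return "none"  # guard against floating-point rounding
```

Layering actions sampled this way onto a base walking motion is one simple route to the statistically grounded variety the paper aims for.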
  • Publication
    Ultrasound Guided Regional Anesthesia Training Simulator Using Microsoft Kinect
    (2014-01-01) Sairam, Aparajith; Feng, Jian; Badler, Norman I; Liu, Jiabin; Richman, Kenneth; Elkassabany, Nabil
    We present a system for the interactive simulation of ultrasound guided peripheral nerve blocks using a Microsoft Kinect®. The system performs motion tracking of both the ultrasound probe and the nerve block needle. Software generates synthetic ultrasound images from previously captured ultrasound images. Details of the software elements in the system are described. Some of the current challenges and future work in this research are discussed.
  • Publication
    Planning Approaches to Constraint-Aware Navigation in Dynamic Environments
    (2015-03-01) Ninomiya, Kai; Kapadia, Mubbasir; Shoulson, Alexander; Garcia, Francisco; Badler, Norman I
    Path planning is a fundamental problem in many areas, ranging from robotics and artificial intelligence to computer graphics and animation. Although there is extensive literature for computing optimal, collision-free paths, there is relatively little work that explores the satisfaction of spatial constraints between objects and agents at the global navigation layer. This paper presents a planning framework that satisfies multiple spatial constraints imposed on the path. The type of constraints specified can include staying behind a building, walking along walls, or avoiding the line of sight of patrolling agents. We introduce two hybrid environment representations that balance computational efficiency and search space density to provide a minimal, yet sufficient, discretization of the search graph for constraint-aware navigation. An extended anytime dynamic planner is used to compute constraint-aware paths, while efficiently repairing solutions to account for varying dynamic constraints or an updating world model. We demonstrate the benefits of our method on challenging navigation problems in complex environments for dynamic agents using combinations of hard and soft, attracting and repelling constraints, defined by both static obstacles and moving obstacles.
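The idea of soft, repelling constraints at the navigation layer can be sketched as a penalty term added to edge costs during search. The grid, penalty value, and plain Dijkstra search below are illustrative simplifications; the paper uses hybrid environment representations and an anytime dynamic planner.

```python
import heapq

def constraint_aware_path(free_cells, start, goal, soft_penalty):
    """Dijkstra over a grid where soft constraints add cost instead of blocking.
    soft_penalty(cell) returns extra cost, e.g. for entering a patrol's line of sight."""
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == goal:
            path = [cell]
            while path[-1] in prev:
                path.append(prev[path[-1]])
            return path[::-1], d
        if d > dist[cell]:
            continue  # stale queue entry
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt not in free_cells:   # hard constraint: obstacle
                continue
            nd = d + 1.0 + soft_penalty(nxt)
            if nd < dist.get(nxt, float("inf")):
                dist[nxt], prev[nxt] = nd, cell
                heapq.heappush(pq, (nd, nxt))
    return None, float("inf")

# 3x3 free grid; one cell is in a patrolling agent's line of sight.
free = {(x, y) for x in range(3) for y in range(3)}
watched = {(1, 1)}   # repelling soft constraint
path, cost = constraint_aware_path(
    free, (0, 0), (2, 2), lambda c: 5.0 if c in watched else 0.0)
# The planner detours around (1, 1) rather than paying the penalty.
```

Because the constraint only inflates cost, the planner still returns a path when the constraint cannot be satisfied, which is the essential difference between soft and hard constraints.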
  • Publication
    Eyes Alive
    (2002-01-01) Badler, Norman I; Badler, Jeremy B; Lee, Sooha Park
    For an animated human face model to appear natural, it should produce eye movements consistent with human ocular behavior. During face-to-face conversational interactions, eyes signal conversational turn-taking and agent thought processes through gaze direction, saccades, and scan patterns. We have implemented an eye movement model based on empirical models of saccades and statistical models of eye-tracking data. Face animations using stationary eyes, eyes with random saccades only, and eyes with statistically derived saccades are compared to evaluate whether they appear natural and effective while communicating.
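A statistically driven saccade generator along these lines can be sketched as sampling magnitude, duration, direction, and inter-saccade interval from simple distributions. All constants below are illustrative approximations of main-sequence relationships reported in the eye-movement literature, not the parameters fitted from eye-tracking data in the paper.

```python
import random

def sample_saccade(rng=random):
    """Draw one saccade from simple statistical models (illustrative parameters)."""
    magnitude = min(rng.expovariate(1 / 8.0), 30.0)   # degrees; small saccades dominate
    duration_ms = 2.2 * magnitude + 21.0              # linear main-sequence approximation
    direction = rng.uniform(0.0, 360.0)               # degrees; uniform for this sketch
    interval_ms = rng.expovariate(1 / 500.0)          # time until the next saccade
    return magnitude, duration_ms, direction, interval_ms
```

Driving the eye joints from a stream of such samples, rather than from fixed or purely random motion, is what separates the "statistically derived" condition from the other two compared in the abstract.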
  • Publication
    Building Anthropometry-Based Virtual Human Models
    (1994-06-01) Azuola, Francisco; Badler, Norman I; Ho, Pei-Hwa; Kakadiaris, Ioannis; Metaxas, Dimitris; Ting, Bond-Jay
    Creating realistic virtual humans requires models that resemble real humans both visually and behaviorally. Physical and behavioral fidelity in human modeling is the focus of research and development work at the University of Pennsylvania's Center for Human Modeling and Simulation. In this article, we briefly describe the Center's human modeling paradigm, Jack®, and our research activities, including Jack's Spreadsheet Anthropometric Scaling System (SASS) and its Free-Form Deformation model.