Technical Reports (CIS)

Penn Engineering is the birthplace of the modern computer. It was here that the ENIAC, the world's first electronic, large-scale, general-purpose digital computer, was developed in 1946. Since this auspicious beginning more than five decades ago, the field of computer science at Penn has been marked by exciting innovations. Over the last few years, Penn CIS has grown in algorithms, theory of computation, networking, systems and architecture, and artificial intelligence. We are building on these successes to strengthen work on databases, graphics, programming languages, and security, and to deepen our interdisciplinary work in such areas as bioinformatics, cognitive science, robotics, and management.

Search results

  • Publication
    Generation, Estimation and Tracking of Faces
    (1998) DeCarlo, Douglas M
    This thesis describes techniques for the construction of face models for both computer graphics and computer vision applications. It also details model-based computer vision methods for extracting and combining data with the model. Our face models respect the measurements of populations described by face anthropometry studies. In computer graphics, the anthropometric measurements permit the automatic generation of varied geometric models of human faces. This is accomplished by producing a random set of face measurements generated according to anthropometric statistics. A face fitting these measurements is realized using variational modeling. In computer vision, anthropometric data biases face shape estimation towards more plausible individuals. Having such a detailed model encourages the use of model-based techniques; we use a physics-based deformable model framework. We derive and solve a dynamic system that accounts for edges in the image and incorporates optical flow as a motion constraint on the model. Our solution ensures this constraint remains satisfied when edge information is used, which helps prevent tracking drift. This method is extended using the residuals from the optical flow solution. The extracted structure of the model can be improved by determining small changes in the model that reduce this error residual. We present experiments in extracting the shape and motion of a face from image sequences; these experiments demonstrate the generality of our technique and provide validation.
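    As a rough sketch of the generation step described above, the Python snippet below draws one random set of face measurements from per-measurement normal statistics. The measurement names and numbers are placeholders, not values from the thesis, and measurements are sampled independently here for simplicity.

        import random

        # Placeholder anthropometric statistics: (mean, standard deviation) in mm.
        # Illustrative values only -- not taken from the thesis or any study.
        FACE_STATS = {
            "bizygomatic_breadth": (139.0, 5.5),
            "nose_height": (50.0, 3.5),
            "mouth_width": (53.0, 3.0),
        }

        def sample_face_measurements(stats):
            """Draw one random set of face measurements, one normal draw per
            measurement, mimicking generation 'according to anthropometric
            statistics'; a face fitting the result would then be realized
            by variational modeling."""
            return {name: random.gauss(mean, sd) for name, (mean, sd) in stats.items()}

        print(sample_face_measurements(FACE_STATS))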
  • Publication
    Building Parameterized Action Representations From Observation
    (2000-01-01) Bindiganavale, Ramamani
    Virtual worlds may be inhabited by intelligent agents who interact by performing various simple and complex actions. If the agents are human-like (embodied), their actions may be generated from motion capture or procedural animation. In this thesis, we introduce the CaPAR interactive system, which combines both these approaches to generate agent-size-neutral representations of actions within a framework called Parameterized Action Representation (PAR). Just as a person may learn a new complex physical task by observing another person doing it, our system observes a single trial of a human performing some complex task that involves interaction with the agent's own body or with objects in the environment and automatically generates semantically rich information about the action. This information can be used to generate similar constrained motions for agents of different sizes. Human movement is captured by electromagnetic sensors. By computing motion zero-crossings and geometric spatial proximities, we isolate significant events, abstract both spatial and visual constraints from an agent's action, and segment a given complex action into several simpler subactions. We analyze each subaction independently and build individual PARs for them. Several PARs can be combined into one complex PAR representing the original activity. Within each motion segment, semantic and style information is extracted. The style information is used to generate the same constrained motion in other differently sized virtual agents by copying the end-effector velocity profile, by following a similar end-effector trajectory, or by scaling and mapping force interactions between the agent and an object. The semantic information is stored in a PAR. The extracted style and constraint information is stored in the corresponding agent and object models.
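    A minimal sketch of the segmentation idea, assuming a one-dimensional end-effector trajectory: subaction boundaries are placed at velocity zero-crossings. The thesis additionally uses geometric spatial proximities and extracts constraints per segment; none of that is modeled here.

        import numpy as np

        def segment_by_zero_crossings(positions, dt=1.0 / 60):
            """Split a 1-D trajectory into subactions wherever the velocity
            changes sign (a motion zero-crossing)."""
            velocity = np.gradient(positions, dt)
            signs = np.sign(velocity)
            boundaries = np.where(np.diff(signs) != 0)[0] + 1
            segments, start = [], 0
            for b in boundaries:
                segments.append((start, int(b)))
                start = int(b)
            segments.append((start, len(positions)))
            return segments

        t = np.linspace(0.0, 2.0, 120)
        x = np.sin(np.pi * t)                # a reach-and-return motion
        print(segment_by_zero_crossings(x))  # roughly: out, back, return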
  • Publication
    Parsing With Lexicalized Tree Adjoining Grammar
    (1990-02-01) Schabes, Yves; Joshi, Aravind K
    Most current linguistic theories give lexical accounts of several phenomena that used to be considered purely syntactic. The information put in the lexicon is thereby increased in both amount and complexity: see, for example, lexical rules in LFG (Kaplan and Bresnan, 1983), GPSG (Gazdar, Klein, Pullum and Sag, 1985), HPSG (Pollard and Sag, 1987), Combinatory Categorial Grammars (Steedman, 1987), Karttunen's version of Categorial Grammar (Karttunen 1986, 1988), some versions of GB theory (Chomsky 1981), and Lexicon-Grammars (Gross 1984). We would like to take this fact into account when defining a formalism. We therefore explore the view that syntactic rules are not separated from lexical items. We say that a grammar is lexicalized (Schabes, Abeillé and Joshi, 1988) if it consists of: (1) a finite set of structures each associated with lexical items; each lexical item will be called the anchor of the corresponding structure; the structures define the domain of locality over which constraints are specified; (2) an operation or operations for composing the structures. The notion of anchor is closely related to the word associated with a functor-argument category in Categorial Grammars. Categorial Grammars (as used, for example, by Steedman, 1987) are 'lexicalized' according to our definition since each basic category has a lexical item associated with it.
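    The definition of lexicalization above lends itself to a small illustration. The Python sketch below (all names illustrative) represents elementary structures carrying lexical anchors and composes them by substitution only; adjoining, TAG's other composition operation, is omitted for brevity.

        class Tree:
            def __init__(self, label, children=None, anchor=None, subst=False):
                self.label = label              # node category, e.g. "NP"
                self.children = children or []  # daughter nodes
                self.anchor = anchor            # lexical item anchoring this structure
                self.subst = subst              # marks a substitution site

        def substitute(tree, site_label, arg):
            """Replace the first substitution node labelled site_label with arg."""
            for i, c in enumerate(tree.children):
                if c.subst and c.label == site_label:
                    tree.children[i] = arg
                    return True
                if substitute(c, site_label, arg):
                    return True
            return False

        # Elementary structures anchored by 'eats' and 'John'.
        eats = Tree("S", [Tree("NP", subst=True),
                          Tree("VP", [Tree("V", anchor="eats"),
                                      Tree("NP", subst=True)])])
        john = Tree("NP", [Tree("N", anchor="John")])
        substitute(eats, "NP", john)            # compose the two structures
        print(eats.children[0].children[0].anchor)  # John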
  • Publication
    Realizability, Covers, and Sheaves I. Application to the Simply-Typed Lambda-Calculus
    (1993-08-12) Gallier, Jean H
    We present a general method for proving properties of typed λ-terms. This method is obtained by introducing a semantic notion of realizability which uses the notion of a cover algebra (as in abstract sheaf theory, a cover algebra being a Grothendieck topology in the case of a preorder). For this, we introduce a new class of semantic structures equipped with preorders, called pre-applicative structures. These structures need not be extensional. In this framework, a general realizability theorem can be shown. Kleene's recursive realizability and a variant of Kreisel's modified realizability both fit into this framework. Applying this theorem to the special case of the term model yields a general theorem for proving properties of typed λ-terms, in particular, strong normalization and confluence. This approach clarifies the reducibility method by showing that the closure conditions on candidates of reducibility can be viewed as sheaf conditions. Part I of this paper applies the above approach to the simply-typed λ-calculus (with types →, ×, +, and ⊥). Part II of this paper deals with the second-order (polymorphic) λ-calculus (with types → and ∀).
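    As a sketch of the "closure conditions on candidates of reducibility" mentioned above, here is the textbook reducibility definition for the simply-typed λ-calculus (arrow types only); this is the standard formulation, not the paper's cover-algebra generalization:

        \mathrm{Red}_{\iota} = \mathrm{SN}, \qquad
        \mathrm{Red}_{A \to B} = \{\, t \mid \forall u \in \mathrm{Red}_A,\; t\,u \in \mathrm{Red}_B \,\}

    Strong normalization then follows by showing that every typable term lies in the reducibility set of its type; the paper's contribution is to recast the closure conditions such sets must satisfy as sheaf conditions.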
  • Publication
    Update Exchange With Mappings and Provenance
    (2007-11-27) Green, Todd J; Karvounarakis, Grigoris; Ives, Zachary G; Tannen, Val
    We consider systems for data sharing among heterogeneous peers related by a network of schema mappings. Each peer has a locally controlled and edited database instance, but wants to ask queries over related data from other peers as well. To achieve this, every peer’s updates propagate along the mappings to the other peers. However, this update exchange is filtered by trust conditions (expressing what data and sources a peer judges to be authoritative), which may cause a peer to reject another’s updates. In order to support such filtering, updates carry provenance information. These systems target scientific data sharing applications, and their general principles and architecture have been described in [21]. In this paper we present methods for realizing such systems. Specifically, we extend techniques from data integration, data exchange, and incremental view maintenance to propagate updates along mappings; we integrate a novel model for tracking data provenance, such that curators may filter updates based on trust conditions over this provenance; we discuss strategies for implementing our techniques in conjunction with an RDBMS; and we experimentally demonstrate the viability of our techniques in the Orchestra prototype system. This technical report supersedes the version that appeared in VLDB 2007 [17] and corrects certain technical claims regarding the semantics of our system (see errata in Sections 3.1 and 4.1.1).
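    A toy sketch of trust-filtered update propagation, assuming a much-simplified model in which provenance is just the list of peers an update passed through; Orchestra's actual provenance model and mapping semantics are far richer, and all names below are illustrative.

        from dataclasses import dataclass, field

        @dataclass
        class Update:
            row: tuple
            provenance: list      # peers the update passed through

        @dataclass
        class Peer:
            name: str
            trusted: set          # peers this peer judges authoritative
            instance: set = field(default_factory=set)

            def receive(self, upd):
                # Trust condition: apply the update only if every peer on
                # its derivation path is trusted; otherwise reject it.
                if set(upd.provenance) <= self.trusted:
                    self.instance.add(upd.row)

        p = Peer("peerB", trusted={"peerA"})
        p.receive(Update(("gene42", "chr7"), provenance=["peerA"]))
        p.receive(Update(("gene99", "chr1"), provenance=["peerC"]))
        print(p.instance)         # only the trusted update was applied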
  • Publication
    The Soundness and Completeness of ACSR (Algebra of Communicating Shared Resources)
    (1993-11-01) Brémond-Grégoire, Patrice; Choi, Jin-Young; Lee, Insup
    Recently, significant progress has been made in the development of timed process algebras for the specification and analysis of real-time systems; one of these is ACSR. ACSR supports synchronous timed actions and asynchronous instantaneous events. Timed actions are used to represent the usage of resources and to model the passage of time. Events are used to capture synchronization between processes. To be able to specify real systems accurately, ACSR supports a notion of priority that can be used to arbitrate among timed actions competing for the use of resources and among events that are ready for synchronization. Equivalence between ACSR terms is defined in terms of strong bisimulation. The paper contains a set of algebraic laws that are proven sound and complete for finite ACSR agents.
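    The notion of priority described above can be caricatured in a few lines, assuming timed actions are sets of (resource, priority) pairs. This is only an illustration of arbitration between competing timed actions, not ACSR's actual preemption relation or its bisimulation semantics.

        def preempts(a, b):
            """True if timed action a wins over b: they compete for at least
            one resource, and a uses every shared resource at strictly
            higher priority."""
            pa, pb = dict(a), dict(b)
            shared = set(pa) & set(pb)
            return bool(shared) and all(pa[r] > pb[r] for r in shared)

        a1 = [("cpu", 2)]
        a2 = [("cpu", 1), ("bus", 1)]
        print(preempts(a1, a2))   # True: a1 uses the cpu at higher priority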
  • Publication
    Viral Marketing and the Diffusion of Trends on Social Networks
    (2008-05-15) Wortman, Jennifer
    We survey the recent literature on theoretical models of diffusion in social networks and the application of these models to viral marketing. To put this work in context, we begin with a review of the most common models that have been examined in the economics and sociology literature, including local interaction games, threshold models, and cascade models, in addition to a family of models based on Markov random fields. We then discuss a series of recent algorithmic and analytical results that have emerged from the computer science community. The first set of results addresses the problem of influence maximization, in which the goal is to determine the optimal group of individuals in a social network to target with an advertising campaign in order to cause a new product or technology to spread throughout the network. We then discuss an analysis of the properties of graphs that allow or prohibit the widespread propagation of trends.
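    The influence maximization problem sketched above admits a compact illustration: the greedy strategy of Kempe, Kleinberg, and Tardos under the independent cascade model, with expected spread estimated by Monte Carlo simulation. The activation probability, trial count, and example graph below are illustrative.

        import random

        def cascade_size(graph, seeds, p=0.1):
            """One run of the independent cascade model: each newly active
            node gets a single chance to activate each neighbor with
            probability p. Returns the number of nodes reached."""
            active, frontier = set(seeds), list(seeds)
            while frontier:
                nxt = []
                for u in frontier:
                    for v in graph.get(u, []):
                        if v not in active and random.random() < p:
                            active.add(v)
                            nxt.append(v)
                frontier = nxt
            return len(active)

        def greedy_seeds(graph, k, trials=200):
            """Greedily pick the node with the largest estimated marginal
            gain in expected spread, k times."""
            seeds = set()
            for _ in range(k):
                def est(v):
                    return sum(cascade_size(graph, seeds | {v})
                               for _ in range(trials)) / trials
                seeds.add(max((v for v in graph if v not in seeds), key=est))
            return seeds

        g = {1: [2, 3], 2: [4], 3: [4], 4: [5], 5: []}
        print(greedy_seeds(g, 2))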
  • Publication
    Revision of QoS Guarantees at the Application/Network Interface
    (1993-03-01) Nahrstedt, Klara; Smith, Jonathan M
    Connection management based on Quality of Service (QoS) offers opportunities for better resource allocation in networks providing service classes. "Negotiation" describes the process of cooperatively configuring application and network resources for an application's use. Complex and long-running applications can reduce the inefficiencies of static allocations by splitting resource use into "eras" bounded by renegotiation of QoS parameters. Renegotiation can be driven by either the application or the network in order to best match application and network dynamics. A key element in this process is a translation between differing perspectives on QoS maintained by applications and network service provision. We model translation with an entity called a "broker".
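    A minimal sketch of the broker's translation role, assuming a video application whose QoS is stated as frame rate and frame size while the network reserves bandwidth and a delay bound; the formulas and field names are illustrative, not from the paper.

        def app_to_network_qos(frame_rate_hz, frame_bytes, max_lag_s):
            """Translate application-level QoS into network-level parameters:
            the broker's job at the application/network interface."""
            return {
                "bandwidth_bps": frame_rate_hz * frame_bytes * 8,
                "delay_bound_s": max_lag_s,
            }

        # Renegotiation at an "era" boundary: the application halves its
        # frame rate, and the broker derives the reduced reservation.
        print(app_to_network_qos(30, 38_000, 0.1))   # 9,120,000 bps
        print(app_to_network_qos(15, 38_000, 0.1))   # 4,560,000 bps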
  • Publication
    Sideways Information Passing for Push-Style Query Processing
    (2007-11-20) Ives, Zachary G; Taylor, Nicholas E
    In many modern data management settings, data is queried from a central node or nodes, but is stored at remote sources. In such a setting it is common to perform "push-style" query processing, using multithreaded pipelined hash joins and bushy query plans to compute parts of the query in parallel; to avoid idling, the CPU can switch between them as delays are encountered. This works well for simple select-project-join queries, but increasingly, Web and integration applications require more complex queries with multiple joins and even nested subqueries. As we demonstrate in this paper, push-style execution of complex queries can be improved substantially via sideways information passing; push-style queries provide many opportunities for information passing that have not been studied in prior literature. We present adaptive information passing, a general runtime decision-making technique for reusing intermediate state from one query subresult to prune and reduce computation of other subresults. We develop two alternative schemes for performing adaptive information passing, which we study in several settings under a variety of workloads.
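    A minimal sketch of the pruning idea, assuming relations are lists of tuples joined on their first column: the keys produced by one subresult are passed "sideways" to skip useless probe tuples in another join. Real systems would use Bloom filters and runtime adaptivity, which are not modeled here.

        def hash_join(build, probe, key_filter=None):
            """Hash join on the first column; key_filter, if given, is a set
            of keys passed sideways from another subresult, used to skip
            probe tuples that cannot contribute to the final answer."""
            table = {}
            for row in build:
                table.setdefault(row[0], []).append(row)
            out = []
            for row in probe:
                if key_filter is not None and row[0] not in key_filter:
                    continue                      # pruned by sideways info
                for match in table.get(row[0], []):
                    out.append(match + row[1:])
            return out

        r = [(1, "a"), (2, "b")]
        s = [(1, "x"), (3, "y")]
        rs = hash_join(r, s)                      # first subresult
        keys = {row[0] for row in rs}             # information passed sideways
        t = [(1, "u"), (2, "v")]
        print(hash_join(rs, t, key_filter=keys))  # (2, "v") is pruned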
  • Publication
    Quotient Lenses
    (2009-02-10) Foster, J. Nathan; Pilkiewicz, Alexandre; Pierce, Benjamin C
    There are now a number of bidirectional programming languages, where every program can be read both as a forward transformation mapping one data structure to another and as a reverse transformation mapping an edited output back to a correspondingly edited input. Besides parsimony—the two related transformations are described by just one expression— such languages are attractive because they promise strong behavioral laws about how the two transformations fit together—e.g., their composition is the identity function. It has repeatedly been observed, however, that such laws are actually a bit too strong: in practice, we do not want them “on the nose,” but only up to some equivalence, allowing inessential details, such as whitespace, to be modified after a round trip. Some bidirectional languages loosen their laws in this way, but only for specific, baked-in equivalences. In this work, we propose a general theory of quotient lenses—bidirectional transformations that are well behaved modulo equivalence relations controlled by the programmer. Semantically, quotient lenses are a natural refinement of lenses, which we have studied in previous work. At the level of syntax, we present a rich set of constructs for programming with canonizers and for quotienting lenses by canonizers. We track equivalences explicitly, with the type of every quotient lens specifying the equivalences it respects. We have implemented quotient lenses as a refinement of the bidirectional string processing language Boomerang. We present a number of useful primitive canonizers for strings, and give a simple extension of Boomerang’s regular-expression-based type system to statically typecheck quotient lenses. The resulting language is an expressive tool for transforming real-world, ad-hoc data formats. We demonstrate the power of our notation by developing an extended example based on the UniProt genome database format and illustrate the generality of our approach by showing how uses of quotienting in other bidirectional languages can be translated into our notation.
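    A toy rendering of the quotienting idea in Python (loosely after the paper's lquot combinator; the real development is in Boomerang, with a static type system tracking equivalences): an identity lens is quotiented by a canonizer that collapses whitespace, so the round-trip laws hold only up to that equivalence.

        import re

        class Lens:
            def __init__(self, get, put):
                self.get, self.put = get, put

        def canonize_ws(s):
            """Canonizer: trim and collapse whitespace runs to single spaces."""
            return re.sub(r"\s+", " ", s.strip())

        def lquot(canonizer, lens):
            """Quotient a lens on the left by a canonizer: sources are
            canonized before get/put, so GetPut holds only modulo the
            canonizer's equivalence."""
            return Lens(get=lambda c: lens.get(canonizer(c)),
                        put=lambda a, c: lens.put(a, canonizer(c)))

        ident = Lens(get=lambda c: c, put=lambda a, c: a)
        wsq = lquot(canonize_ws, ident)
        print(wsq.get("hello    world\n"))   # "hello world"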