Guérin, Roch A

Search Results

Now showing 1 - 10 of 66
  • Publication
    Reliable Interdomain Routing Through Multiple Complementary Routing Processes
    (2008-10-15) Liao, Yong; Gao, Lixin; Guérin, Roch A; Zhang, Zhi-Li
    The Internet inter-domain routing protocol, BGP, experiences frequent routing disruptions such as transient routing loops or loss of connectivity. The goal of this paper is to address this issue while preserving BGP’s benefits in terms of operational maturity and flexibility in accommodating diverse policies. In realizing this goal, we apply to inter-domain routing a common concept in the design of highly reliable systems, namely, the use of redundancy, which we introduce in a manner that maximizes compatibility with the existing BGP protocol. The basic idea is to run several, mostly unchanged BGP processes that compute complementary routes, so that in the presence of network instabilities a working path remains available to any destination. The paper outlines the design of this approach and compares it to previously proposed alternatives. The benefits of the scheme are demonstrated using actual BGP data and realistic simulations.
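The redundancy idea above can be illustrated with a toy sketch (this is not the paper's protocol, just an illustration of the principle): two route-selection processes rank the same candidate paths with complementary preferences, so when the primary's choice fails, the secondary process is likely to have a different, still-working path already installed.

```python
# Illustrative sketch only: two "processes" pick routes to a destination
# with opposite preferences, so their choices tend to be complementary.

def best_path(paths, prefer_short=True):
    """Pick a path by hop count; the complementary process inverts the
    preference so the two processes tend to install different paths."""
    return min(paths, key=lambda p: len(p) if prefer_short else -len(p))

# hypothetical candidate AS paths to destination "d"
paths_to_d = [("a", "b", "d"), ("a", "c", "e", "d")]
primary = best_path(paths_to_d, prefer_short=True)
backup = best_path(paths_to_d, prefer_short=False)

# if a failure withdraws the primary path, the complementary process
# already holds a working alternative
assert primary != backup
```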
  • Publication
    Modeling the dynamics of network technology adoption and the role of converters
    (2009-06-22) Sen, Soumya; Guérin, Roch; Hosanagar, Kartik; Jin, Youngmi
    New network technologies constantly seek to displace incumbents. Their success depends on technological superiority, the size of the incumbent's installed base, users' adoption behaviors, and various other factors. The goal of this paper is to develop an understanding of competition between network technologies, and identify the extent to which different factors, in particular converters (a.k.a. gateways), affect the outcome. Converters can help entrants overcome the influence of the incumbent's installed base by enabling cross-technology inter-operability. However, they have development, deployment, and operations costs, and can introduce performance degradations and functionality limitations, so that if, when, why, and how they help is often unclear. To this end, the paper proposes and solves a model for adoption of competing network technologies by individual users. The model incorporates a simple utility function that captures key aspects of users' adoption decisions. Its solution reveals a number of interesting and at times unexpected behaviors, including the possibility for converters to reduce overall market penetration of the technologies and to prevent convergence to a stable state; something that never arises in their absence. The findings were tested for robustness, e.g., different utility functions and adoption models, and found to remain valid across a broad range of scenarios.
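A stylized sketch of the kind of adoption dynamics the abstract describes (the utility function, rates, and parameter values here are assumptions for illustration, not the paper's exact model): each technology's utility combines standalone quality with a network externality, and a converter with efficiency `e` lets adopters of one technology derive partial benefit from the other's installed base.

```python
# Hypothetical adoption dynamics: x1, x2 are adoption fractions of two
# competing technologies; q1, q2 their standalone qualities; e the
# converter efficiency (0 = no converter, 1 = perfect interoperability).

def step(x1, x2, q1, q2, e, rate=0.1):
    """Non-adopters gradually join whichever technology offers higher
    utility = quality + own installed base + converted foreign base."""
    u1 = q1 + x1 + e * x2
    u2 = q2 + x2 + e * x1
    free = 1.0 - x1 - x2
    if u1 > u2:
        x1 += rate * free
    elif u2 > u1:
        x2 += rate * free
    return x1, x2

# incumbent (tech 1) starts with the larger installed base;
# entrant (tech 2) is technologically better (q2 > q1)
x1, x2 = 0.4, 0.05
for _ in range(200):
    x1, x2 = step(x1, x2, q1=0.0, q2=0.3, e=0.0)   # no converter
# the incumbent's installed base locks the better entrant out

y1, y2 = 0.4, 0.05
for _ in range(200):
    y1, y2 = step(y1, y2, q1=0.0, q2=0.3, e=1.0)   # perfect converter
# the converter neutralizes the installed-base advantage, and the
# technologically superior entrant wins instead
```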
  • Publication
    Functionality-rich Versus Minimalist Platforms: A Two-sided Market Analysis
    (2011-07-24) Sen, Soumya; Guérin, Roch A; Hosanagar, Kartik
Should a new "platform" target a functionality-rich but complex and expensive design, or instead opt for a bare-bones but cheaper one? This is a fundamental question with profound implications for the eventual success of any platform. A general answer is, however, elusive, as it involves a complex trade-off between benefits and costs. The intent of this paper is to introduce an approach based on standard tools from the fields of marketing and economics, which can offer some insight into this difficult question. We demonstrate its applicability by developing and solving a generic model that incorporates key interactions between platform stakeholders. The solution confirms that the "optimal" number of features a platform should offer strongly depends on variations in cost factors. More interestingly, it reveals a high sensitivity to small relative changes in those costs. The paper's contribution and motivation are in establishing the potential of such a cross-disciplinary approach for providing qualitative and quantitative insights into the complex question of platform design.
  • Publication
    Migrating the Internet to IPv6: An Exploration of the When and Why
    (2015-02-24) Nikkhah, Mehdi; Guerin, Roch
The paper documents and to some extent elucidates the progress of IPv6 across major Internet stakeholders since its introduction in the mid-1990s. IPv6 offered an early solution to a well-understood and well-documented problem IPv4 was expected to encounter. In spite of early standardization and awareness of the issue, the Internet's march to IPv6 has been anything but smooth, even if recent data point to an improvement. The paper documents this progression for several key Internet stakeholders using available measurement data, and identifies changes in the IPv6 ecosystem that may be in part responsible for how it has unfolded. The paper also develops a stylized model of IPv6 adoption across those stakeholders, and validates its qualitative predictive ability by comparing it to measurement data.
  • Publication
    Always Acyclic Distributed Path Computation
    (2008-05-20) Guérin, Roch A; Ray, Saikat; Kwong, Kin-Wah (Eric); Sofia, Rute
Distributed routing algorithms may give rise to transient loops during path recomputation, which can pose significant stability problems in high-speed networks. We present a new algorithm, Distributed Path Computation with Intermediate Variables (DIV), which can be combined with any distributed routing algorithm to guarantee that the directed graph induced by the routing decisions remains acyclic at all times. The key contribution of DIV, besides its ability to operate with any routing algorithm, is an update mechanism using simple message exchanges between neighboring nodes that guarantees loop-freedom at all times. DIV provably outperforms existing loop-prevention algorithms in several key metrics such as frequency of synchronous updates and the ability to maintain paths during transitions. Simulation results quantifying these gains in the context of shortest path routing are presented. In addition, DIV's universal applicability is illustrated by studying its use with a routing algorithm that operates according to a non-shortest-path objective. Specifically, this algorithm seeks robustness against failures by maximizing the number of next-hops available at each node for each destination.
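The core invariant behind this style of loop prevention can be sketched as follows (a minimal illustration under simplifying assumptions, not the paper's full DIV protocol, which also handles value decreases via neighbor acknowledgements): each node holds a value for a destination and may only forward to a neighbor with a strictly smaller value, so values strictly decrease along any forwarding path and no cycle can form.

```python
# Minimal sketch of an acyclicity invariant in the spirit of DIV:
# a node may adopt a neighbor as next hop only if the neighbor's
# "intermediate variable" is strictly smaller than its own.

class Node:
    def __init__(self, name, value):
        self.name = name
        self.value = value      # per-destination intermediate variable
        self.next_hop = None

def try_switch(node, neighbor):
    """Switch next hop only if the invariant is preserved; since values
    strictly decrease along any path, a loop would require a value to be
    smaller than itself, which is impossible."""
    if neighbor.value < node.value:
        node.next_hop = neighbor
        return True
    return False

a, b, c = Node("a", 3), Node("b", 2), Node("c", 1)
assert try_switch(a, b)       # 2 < 3: allowed
assert try_switch(b, c)       # 1 < 2: allowed
assert not try_switch(c, a)   # would close the loop a -> b -> c -> a
```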
  • Publication
    A Simple FIFO-Based Scheme for Differentiated Loss Guarantees
    (2006-07-22) Huang, Yaqing; Guérin, Roch A
    Today’s Internet carries traffic from a broad range of applications with different requirements. This has stressed its original, one-class, best-effort model, and has been a major driver of the many efforts aimed at introducing QoS. These efforts have, however, been met with only limited success, in part because the complexity they add is often at odds with the scalability requirements of the Internet. This has motivated many investigations for solutions that offer a better trade-off between service differentiation and complexity. This paper shares similar goals and proposes a simple scheme, Bounded Random Drop (BRD), that supports multiple service classes and is implemented using a single FIFO queue and a basic random dropping mechanism. BRD focuses on loss differentiation, as although losses and delay are both important, the steady rise of Internet link speeds is progressively limiting the impact of delay differentiation. It offers strong loss differentiation capabilities, and does not require traffic profiles or admission controls. BRD guarantees each class losses that, when feasible, are no worse than a specified bound, while enforcing differentiation only when required to meet those bounds. The performance of BRD is investigated for a broad range of traffic mixes and shown to consistently achieve its design goals.
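A hypothetical sketch of BRD-style loss differentiation (the paper's actual dropping rules differ; the allocation policy below is an assumption for illustration): given the aggregate drop rate the single FIFO queue must sustain, drops are taken from the least-protected classes first, and a bounded class only loses traffic up to its bound when that is feasible.

```python
# Illustrative allocation of per-class random-drop probabilities for a
# single FIFO queue: classes with looser (or no) loss bounds absorb the
# required drops before tightly bounded classes are touched.

def brd_drop_probs(arrivals, agg_drop, bounds):
    """arrivals: per-class arrival rates; agg_drop: aggregate drop rate
    the queue must enforce; bounds: per-class loss bounds (None = best
    effort). Returns per-class drop probabilities."""
    to_drop = agg_drop * sum(arrivals)
    probs = [0.0] * len(arrivals)
    # visit best-effort / loosest-bound classes first
    order = sorted(range(len(arrivals)),
                   key=lambda i: float('inf') if bounds[i] is None else bounds[i],
                   reverse=True)
    for i in order:
        # a bounded class may lose at most arrivals * bound
        cap = arrivals[i] if bounds[i] is None else arrivals[i] * bounds[i]
        take = min(cap, to_drop)
        probs[i] = take / arrivals[i]
        to_drop -= take
    return probs

# two equal-rate classes: one with a 1% loss bound, one best effort,
# and a 10% aggregate drop rate to enforce
p = brd_drop_probs([5.0, 5.0], 0.1, [0.01, None])
assert p[0] == 0.0              # bounded class is untouched here
assert abs(p[1] - 0.2) < 1e-9   # best-effort class absorbs all drops
```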
  • Publication
    SICAP, A Shared-segment Inter-domain Control Aggregation Protocol
    (2003-06-24) Sofia, Rute; Guérin, Roch A; Veiga, Pedro
Existing Quality of Service models are well defined in the data path, but lack an end-to-end control path mechanism that guarantees the required resources to bandwidth-intensive services, such as video streaming. Current reservation protocols provide scalable resource reservation inside routing domains. However, it is primarily between such domains that scalability becomes a major issue, since inter-domain links experience large volumes of reservation requests. As a possible solution, we present and evaluate the Shared-segment based Inter-domain Control Aggregation Protocol (SICAP), which affords the benefits of shared-segment aggregation while avoiding its major drawback, namely, its sensitivity to the intensity of requests [1]. We present results of simulations that compare the performance of SICAP against that of the Border Gateway Reservation Protocol (BGRP), which relies on sink-tree aggregation to achieve scalability.
  • Publication
    Individual QoS versus Aggregate QoS: A Loss Performance Study
    (2002-06-23) Xu, Ying; Guérin, Roch A
This paper explores, primarily by means of analysis, the differences that can exist between individual and aggregate loss guarantees in an environment where guarantees are only provided at an aggregate level. The focus is on understanding which traffic parameters are responsible for inducing possible deviations and to what extent. In addition, we seek to evaluate the level of additional resources, e.g., bandwidth or buffer, required to ensure that all individual loss measures remain below their desired target. The paper's contributions are in developing analytical models that enable the evaluation of individual loss probabilities in settings where only aggregate losses are controlled, and in identifying traffic parameters that play a dominant role in causing differences between individual and aggregate losses. The latter allows the construction of guidelines identifying what kind of traffic can be safely multiplexed into a common service class.
  • Publication
    On the Impact of Policing and Rate Guarantees in Diff-Serv Networks: A Video Streaming Application Perspective
    (2001-08-01) Ashmawi, Wael; Guérin, Roch A; Wolf, Stephen; Pinson, Margaret
Over the past few years, there have been a number of proposals aimed at introducing different levels of service in the Internet. One of the more recent proposals is the Differentiated Services (Diff-Serv) architecture, and in this paper we explore how the policing actions and associated rate guarantees provided by the Expedited Forwarding (EF) service translate into perceived benefits for applications that are the presumed users of such enhancements. Specifically, we focus on video streaming applications that arguably have relatively strong service quality requirements, and which should, therefore, stand to benefit from the availability of some form of enhanced service. Our goal is to gain a better understanding of the relation that exists between application-level quality measures and the selection of the network-level parameters that govern the delivery of the guarantees that an EF-based service would provide. Our investigation, which is experimental in nature, relies on a number of standard streaming video servers and clients that have been modified and instrumented to allow quantification of the perceived quality of the received video stream. Quality assessments are performed using a Video Quality Measurement tool based on the ANSI objective quality standard. Measurements were made over both a local Diff-Serv testbed and across the QBone, a QoS-enabled segment of the Internet2 infrastructure. The paper reports and analyzes the results of those measurements.
  • Publication
    Making IGP Routing Robust to Link Failures
    (2005-05-02) Sridharan, Ashwin; Guérin, Roch A
    An important requirement of a robust traffic engineering solution is insensitivity to changes, be they in the form of traffic fluctuations or changes in the network topology because of link failures. In this paper we focus on developing a fast and effective technique to compute traffic engineering solutions for OSPF/IS-IS environments that are robust to link failures in the logical topology. OSPF and IS-IS are the dominant intra-domain routing protocols where traffic engineering is primarily governed by link weights. Our focus is on computing a single set of link weights for a traffic engineering instance that performs well over all single logical link failures. Such types of failures, although usually not long lasting, of the order of tens of minutes, can occur with high enough frequency, of the order of several a day, to significantly affect network performance. The relatively short duration of such failures coupled with issues of computational complexity and convergence time due to the size of current day networks discourage adaptive reactions to such events. Consequently, it is desirable to a priori compute a routing solution that performs well in all such scenarios. Through computational evaluations we demonstrate that our technique yields link weights that perform well over all single link failures and also scales well, in terms of computational complexity, with the size of the network.