Khanna, Sanjeev
Search Results
Now showing publications 1-10 of 58
Publication: On Broadcast Disk Paging (1999-03-13)
Authors: Khanna, Sanjeev; Liberatore, Vincenzo

Broadcast disks are an emerging paradigm for massive data dissemination. In a broadcast disk, data is divided into n equal-sized pages, and pages are broadcast in a round-robin fashion by a server. Broadcast disks are effective because many clients can simultaneously retrieve any transmitted data. Paging is used by the clients to improve performance, much as in virtual memory systems. However, paging on broadcast disks differs from virtual memory paging in at least two fundamental aspects:
- A page fault in the broadcast disk model has a variable cost that depends on the requested page as well as the current state of the broadcast.
- Prefetching is both natural and a provably essential mechanism for achieving significantly better competitive ratios in broadcast disk paging.
In this paper, we design a deterministic algorithm that uses prefetching to achieve an O(n log k) competitive ratio for the broadcast disk paging problem, where k denotes the size of the client's cache. We also show a matching lower bound of Ω(n log k) that applies even when the adversary is not allowed to use prefetching. In contrast, we show that when prefetching is not allowed, no deterministic online algorithm can achieve a competitive ratio better than Ω(nk). Moreover, we show a lower bound of Ω(n log k) on the competitive ratio achievable by any nonprefetching randomized algorithm against an oblivious adversary. These lower bounds are trivially matched from above by known results about deterministic and randomized marking algorithms for paging. An interpretation of our results is that in broadcast disk paging, prefetching is a perfect substitute for randomization.
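As a minimal sketch of the variable fault-cost model described above (an editorial illustration, not the paper's prefetching algorithm), assume pages 0..n-1 are broadcast round-robin; a client that faults on page p while the broadcast is about to transmit page t must wait until p comes around:

```python
# Illustrative sketch of the broadcast disk fault-cost model; this is a
# naive demand-paging client, not the paper's O(n log k)-competitive
# prefetching algorithm.
def fault_cost(p: int, t: int, n: int) -> int:
    """Time a client waits for page p when the round-robin broadcast of
    pages 0..n-1 is about to transmit page t."""
    return (p - t) % n

def demand_paging_cost(requests, cache, n):
    """Total cost of serving a request sequence with a fixed cache,
    evicting arbitrarily on a miss (placeholder policy, no prefetching)."""
    cache = set(cache)
    t = 0  # page the server is about to broadcast
    cost = 0
    for p in requests:
        if p not in cache:
            cost += fault_cost(p, t, n)
            t = (p + 1) % n      # broadcast has just transmitted p
            if cache:
                cache.pop()      # arbitrary eviction, for illustration only
            cache.add(p)
    return cost
```

Note how the same miss can cost anywhere from 0 to n-1 depending on the broadcast position, which is the first fundamental difference from virtual memory paging listed above.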
Publication: Approximation Schemes for Preemptive Weighted Flow Time (2002-05-19)
Authors: Khanna, Sanjeev

We present the first approximation schemes for minimizing weighted flow time on a single machine with preemption. Our first result is an algorithm that computes a (1 + ε)-approximate solution for any instance of weighted flow time in n^{O(ln W ln P / ε^3)} time; here P is the ratio of the maximum job processing time to the minimum job processing time, and W is the ratio of the maximum job weight to the minimum job weight. This result directly gives a quasi-PTAS for weighted flow time when P and W are poly-bounded, and a PTAS when they are both O(1). We strengthen the former result to show that in order to get a quasi-PTAS it suffices for just one of P and W to be poly-bounded. Our result provides strong evidence for the hypothesis that the weighted flow time problem has a PTAS. We note that the problem is strongly NP-hard even when P and W are O(1). We next consider two important special cases of weighted flow time: when P is O(1) and W is arbitrary, and when the weight of a job is the inverse of its processing time (the stretch metric). For both of these special cases we obtain a (1 + ε)-approximation for any ε > 0 by using a randomized partitioning scheme to reduce an arbitrary instance to several instances, all of which have P and W bounded by a constant that depends only on ε.
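For concreteness (an editorial illustration, not from the paper), the objective being approximated is the total weighted flow time Σ_j w_j (C_j - r_j), where job j has release time r_j and weight w_j, and C_j is its completion time in a preemptive schedule:

```python
from dataclasses import dataclass

@dataclass
class Job:
    release: float     # r_j: time the job becomes available
    processing: float  # p_j: total processing requirement
    weight: float      # w_j: importance of the job

def weighted_flow_time(jobs, completion_times):
    """Objective value sum_j w_j * (C_j - r_j); the flow time C_j - r_j
    is the total time job j spends in the system."""
    return sum(job.weight * (c - job.release)
               for job, c in zip(jobs, completion_times))

# Tiny example of a preemptive schedule: run job 0 in [0,1), preempt it
# for job 1 in [1,2), then finish job 0 in [2,4).
jobs = [Job(release=0, processing=3, weight=2),
        Job(release=1, processing=1, weight=5)]
print(weighted_flow_time(jobs, [4, 2]))  # 2*(4-0) + 5*(2-1) = 13
```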
Publication: A PTAS for Minimizing Average Weighted Completion Time With Release Dates on Uniformly Related Machines (2000-01-01)
Authors: Khanna, Sanjeev

A classical scheduling problem is to find schedules that minimize the average weighted completion time of jobs with release dates. When multiple machines are available, the machine environments may range from identical machines (the processing time required by a job is invariant across the machines) at one end, to unrelated machines (the processing time required by a job on any machine is an arbitrary function of the specific machine) at the other end of the spectrum. While the problem is strongly NP-hard even in the case of a single machine, constant factor approximation algorithms have been known for even the most general machine environment of unrelated machines. Recently, a polynomial-time approximation scheme (PTAS) was discovered for the case of identical parallel machines [1]. In contrast, it is known that this problem is MAX SNP-hard for unrelated machines [10]. An important open problem is to determine the approximability of the intermediate case of uniformly related machines, where each machine i has a speed s_i and it takes p/s_i time to execute a job of processing size p. In this paper, we resolve this problem by obtaining a PTAS for it. This improves the earlier known ratio of (2 + ε) for the problem.

Publication: Selection with Monotone Comparison Costs (2003-01-12)
Authors: Kannan, Sampath; Khanna, Sanjeev

We consider the problem of selecting the r-th smallest element from a list of n elements under a model where the comparisons may have different costs depending on the elements being compared. This model was introduced by [3] and is realistic in the context of comparisons between complex objects. An important special case of this general cost model is one where the comparison costs are monotone in the sizes of the elements being compared. This monotone cost model covers most "natural" cost models that arise, and the selection problem turns out to be the most challenging one among the usual problems for comparison-based algorithms. We present an O(log^2 n)-competitive algorithm for selection under the monotone cost model. This is in contrast to an Ω(n) lower bound that is known for arbitrary comparison costs. We also consider selection under a special case of monotone costs, the min model, where the cost of comparing two elements is the minimum of their sizes. We give a randomized O(1)-competitive algorithm for the min model.

Publication: Archiving Scientific Data (2002-06-04)
Authors: Khanna, Sanjeev; Tajima, Keishi; Tan, Wang-Chiew

We present an archiving technique for hierarchical data with key structure. Our approach is based on the notion of timestamps, whereby an element appearing in multiple versions of the database is stored only once, along with a compact description of the versions in which it appears. The basic idea of timestamping was discovered by Driscoll et al. in the context of persistent data structures, where one wishes to track the sequence of changes made to a data structure. We extend this idea to develop an archiving tool for XML data that is capable of providing meaningful change descriptions and can also efficiently support a variety of basic functions concerning the evolution of data, such as retrieval of any specific version from the archive and querying the temporal history of any element. This is in contrast to diff-based approaches, where such operations may require undoing a large number of changes or significant reasoning with the deltas. Surprisingly, our archiving technique does not incur any significant space overhead when contrasted with other approaches. Our experimental results support this and also show that the compacted archive file interacts well with other compression techniques. Finally, another useful property of our approach is that the resulting archive is also in XML and hence can directly leverage existing XML tools.
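A minimal sketch of the timestamping idea (illustrative only; the class and method names are assumptions, and the actual tool operates on keyed XML hierarchies rather than the flat key-value snapshots used here):

```python
# Illustrative sketch of timestamp-based archiving: each keyed element is
# stored once, tagged with the set of versions in which it appears.
class Archive:
    def __init__(self):
        self.store = {}    # key -> list of (value, set of version numbers)
        self.version = 0

    def add_version(self, snapshot):
        """Merge a new database version (a key -> value dict) into the archive."""
        self.version += 1
        for key, value in snapshot.items():
            entries = self.store.setdefault(key, [])
            for val, versions in entries:
                if val == value:          # unchanged value: extend its timestamp
                    versions.add(self.version)
                    break
            else:
                entries.append((value, {self.version}))

    def retrieve(self, v):
        """Reconstruct version v by filtering on timestamps (no delta replay)."""
        return {k: val for k, entries in self.store.items()
                for val, vs in entries if v in vs}

    def history(self, key):
        """Versions in which the element with this key appears."""
        return sorted(v for _, vs in self.store.get(key, []) for v in vs)
```

Retrieval and per-element history are simple filters over the timestamps, which is the contrast with diff-based archives drawn in the abstract.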
Publication: On Computing Functions with Uncertainty (2001-05-21)
Authors: Khanna, Sanjeev; Tan, Wang-Chiew

We study the problem of computing a function f(x1, ..., xn) given that the actual values of the variables x_i are known only with some uncertainty. For each variable x_i, an interval I_i is known such that the value of x_i is guaranteed to fall within this interval. Any such interval can be probed to obtain the actual value of the underlying variable; however, there is a cost associated with each such probe. The goal is to adaptively identify a minimum-cost sequence of probes such that, regardless of the actual values taken by the unprobed x_i's, the value of the function f can be computed to within a specified precision. We design online algorithms for this problem when f is either the selection function or an aggregation function such as sum or average. We consider three natural models of precision and give algorithms for each model. We analyze our algorithms in the framework of competitive analysis and show that they are asymptotically optimal. Finally, we also study online algorithms for functions that are obtained by composing selection and aggregation functions.

Publication: The Approximability of Constraint Satisfaction Problems (2000-01-01)
Authors: Khanna, Sanjeev; Sudan, Madhu; Trevisan, Luca; Williamson, David P.

We study optimization problems that may be expressed as "Boolean constraint satisfaction problems". An instance of a Boolean constraint satisfaction problem is given by m constraints applied to n Boolean variables. Different computational problems arise from constraint satisfaction problems depending on the nature of the "underlying" constraints as well as on the goal of the optimization task. Here we consider four possible goals: MAX CSP (MIN CSP) is the class of problems where the goal is to find an assignment maximizing the number of satisfied constraints (minimizing the number of unsatisfied constraints). MAX ONES (MIN ONES) is the class of optimization problems where the goal is to find an assignment satisfying all constraints with the maximum (minimum) number of variables set to 1. Each class consists of infinitely many problems, and a problem within a class is specified by a finite collection of finite Boolean functions that describe the possible constraints that may be used. Tight bounds on the approximability of every problem in MAX CSP were obtained by Creignou [11]. In this work we determine tight bounds on the "approximability" (i.e., the ratio to within which each problem may be approximated in polynomial time) of every problem in MAX ONES, MIN CSP, and MIN ONES. Combined with the result of Creignou, this completely classifies all optimization problems derived from Boolean constraint satisfaction. Our results capture a diverse collection of optimization problems such as MAX 3-SAT, MAX CUT, MAX CLIQUE, MIN CUT, and NEAREST CODEWORD. Our results unify recent results on the (in)approximability of these optimization problems and yield a compact presentation of most known results. Moreover, these results provide a formal basis for many statements on the behavior of natural optimization problems that have so far only been observed empirically.

Publication: Power-Conserving Computation of Order-Statistics over Sensor Networks (2004-06-14)
Authors: Khanna, Sanjeev

We study the problem of power-conserving computation of order statistics in sensor networks. Significant power-reducing optimizations have been devised for computing simple aggregate queries such as COUNT, AVERAGE, or MAX over sensor networks. In contrast, aggregate queries such as MEDIAN have seen little progress over the brute-force approach of forwarding all data to a central server. Moreover, the battery life of current sensors seems largely determined by communication costs; we therefore aim to minimize the number of bytes transmitted. Unoptimized aggregate queries typically impose extremely high power consumption on a subset of sensors located near the server. Metrics such as total communication cost underestimate the penalty of such imbalance: network lifetime may be dominated by the worst-case replacement time for depleted batteries. In this paper, we design the first algorithms for computing order statistics such that power consumption is balanced across the entire network. Our first main result is a distributed algorithm that computes an ε-approximate quantile summary of the sensor data such that each sensor transmits only O(log^2 n / ε) data values, irrespective of the network topology, an improvement over the current worst-case behavior of Ω(n). Second, we show an improved result when the height h of the network is significantly smaller than n. Our third result is that we can exactly compute any order statistic (e.g., the median) in a distributed manner such that each sensor needs to transmit O(log^3 n) values. Further, we design the aggregates used by our algorithms to be decomposable. An aggregate Q over a set S is decomposable if there exists a function f such that for all S = S1 ∪ S2, Q(S) = f(Q(S1), Q(S2)). We can thus directly apply existing optimizations to decomposable aggregates that increase error-resilience and reduce communication cost. Finally, we validate our results empirically, through simulation. When we compute the median exactly, we show that, even for moderate-size networks, the worst communication cost for any single node is several times smaller than the corresponding cost in prior median algorithms. We show similar cost reductions when computing approximate order-statistic summaries with guaranteed precision. In all cases, our total communication cost over the entire network is smaller than or equal to the total cost of prior algorithms.
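The decomposability property defined above is what lets partial results be combined as they flow up the network. A minimal sketch (illustrative, with S1 and S2 taken disjoint) of why COUNT and MAX decompose while an exact MEDIAN does not, which is why the paper works with approximate quantile summaries instead:

```python
# Decomposable aggregates: Q(S) = f(Q(S1), Q(S2)) whenever S = S1 ∪ S2.
s1, s2 = [3, 9, 4], [7, 1]

# COUNT decomposes with f(a, b) = a + b.
assert len(s1) + len(s2) == len(s1 + s2)

# MAX decomposes with f(a, b) = max(a, b).
assert max(max(s1), max(s2)) == max(s1 + s2)

# Exact MEDIAN does not decompose: no f over the two sub-medians alone can
# work, since the two splits below have identical sub-medians but different
# overall medians.
def median(xs):
    xs = sorted(xs)
    return xs[len(xs) // 2]

a1, a2 = [1, 2, 9], [0, 5, 6]   # sub-medians (2, 5); overall median 5
b1, b2 = [1, 2, 3], [4, 5, 9]   # sub-medians (2, 5); overall median 4
assert (median(a1), median(a2)) == (median(b1), median(b2))
assert median(a1 + a2) != median(b1 + b2)
```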
Publication: Time-Constrained Scheduling of Weighted Packets On Trees and Meshes (2003-04-01)
Authors: Khanna, Sanjeev; Rajaraman, Rajmohan; Rosén, Adi

The time-constrained packet routing problem is to schedule a set of packets to be transmitted through a multi-node network, where every packet has a source and a destination (as in traditional packet routing problems) as well as a release time and a deadline. The objective is to schedule the maximum number of packets subject to deadline constraints. This problem is studied in [1], where it is shown to be NP-complete even when the underlying topology is a linear array. Approximation algorithms are also provided in [1] for the linear array and the unidirectional ring, both for the case where packets may be buffered in transit and the case where they may not be. In this paper we extend the results of [1] in two directions. First, we consider the more general network topologies of trees and 2-dimensional meshes. Second, we associate with each packet a measure of utility, called a weight, and study the problem of maximizing the total weight of the packets that are scheduled subject to their timing constraints. For the bufferless case, we provide constant-factor approximation algorithms for the time-constrained scheduling problem with weighted packets on trees and meshes. We also provide logarithmic approximations for the same problems in the buffered case. These results are complemented by new lower bounds, which demonstrate that we cannot hope to achieve the same results for general network topologies. For example, we show that if k packets are required to follow prescribed paths in an arbitrary graph, then unless NP = ZPP, there is no polynomial-time k^(1-ε)-approximation, for any ε > 0, to the optimal set of packets that can be scheduled.

Publication: Randomized Pursuit–Evasion in a Polygonal Environment (2005-10-01)
Authors: Kannan, Sampath; Khanna, Sanjeev

This paper contains two main results. First, we revisit the well-known visibility-based pursuit–evasion problem and show that, in contrast to deterministic strategies, a single pursuer can locate an unpredictable evader in any simply connected polygonal environment using a randomized strategy. The evader can be arbitrarily faster than the pursuer, and it may know the position of the pursuer at all times, but it does not have prior knowledge of the random decisions made by the pursuer. Second, using the randomized algorithm together with the solution to a problem called the "lion and man problem" as subroutines, we present a strategy for two pursuers (one of which is at least as fast as the evader) to quickly capture an evader in a simply connected polygonal environment. We show how this strategy can be extended to obtain a strategy for a polygonal room with a door, for two pursuers who have only line-of-sight communication, and for a single pursuer (at the expense of increased capture time).