Communication Cost Research Papers - Academia.edu

Wireless sensor networks are small, inexpensive and flexible computational platforms that have found popular applications in various areas, including environmental monitoring, health care and disaster recovery. One fundamental question is how to place the nodes in the network so that complete coverage of the monitored area is achieved. In this paper, we use techniques from discrepancy theory that accurately represent the uncovered area using just a few discrete points, to ensure that every point in the network is covered by at least k sensors, where k is calculated from user reliability requirements. Our technique is fully distributed, deploys a small number of sensors, and minimizes communication costs. Our experiments demonstrate that it is highly effective in achieving a reliable restoration of a given sensor network area.
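The k-coverage check the abstract describes can be sketched as a simple test over discrete sample points; this is a minimal illustration assuming a disc sensing model, with all names hypothetical (the paper's discrepancy-theoretic point construction is not reproduced here):

```python
import math

def is_k_covered(points, sensors, radius, k):
    """Check whether every sample point lies within sensing `radius`
    of at least k sensors (disc model)."""
    for px, py in points:
        hits = sum(1 for sx, sy in sensors
                   if math.hypot(px - sx, py - sy) <= radius)
        if hits < k:
            return False
    return True
```

The discrepancy-theoretic contribution is precisely that a small, well-chosen set of `points` suffices to certify coverage of the whole continuous area.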

In this paper, the problem of computing a one-dimensional FFT on a c-dimensional torus multicomputer is addressed. Different approaches are proposed which differ in the way they use the interconnection network of the torus. One approach is based on the multidimensional index mapping technique for FFT computation. A second approach is based on embedding on the torus a hypercube algorithm for computing the radix-2 Cooley-Tukey FFT. The third approach reduces the communication cost of the hypercube algorithm through the communication pipelining technique. Analytical models are presented to compare the different approaches. Finally, some performance estimates are given to illustrate the comparison.
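For reference, the sequential radix-2 Cooley-Tukey recursion that these torus approaches parallelize can be sketched as follows (a textbook version, not the paper's distributed formulation):

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])   # even-indexed samples
    odd = fft(x[1::2])    # odd-indexed samples
    out = [0] * n
    for k in range(n // 2):
        # Twiddle factor combines the two half-size transforms.
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out
```

The even/odd split is the index mapping that the torus algorithms distribute across nodes, and it is the source of their communication cost.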

This paper presents a framework that develops algorithms for solving combined locational and multihop routing optimization problems. The objective is to determine resource node locations in a multiagent network and to specify the multihop routes from each agent to a common destination through a network of resource nodes that minimize total communication cost. These problems are computationally complex (NP-hard) where

In a snoopy cache multiprocessor system, each processor has a cache in which it stores blocks of data. Each cache is connected to a bus used to communicate with the other caches and with main memory. For several of the proposed models of snoopy caching, we present new on-line algorithms which decide, for each cache, which blocks to retain and which to drop in order to minimize communication over the bus. We prove that, for any sequence of operations, our algorithms' communication costs are within a constant factor of the minimum required for that sequence; for some of our algorithms we prove that no on-line algorithm has this property with a smaller constant.

In this paper we investigate the use of hierarchical reinforcement learning to speed up the acquisition of cooperative multi-agent tasks. We extend the MAXQ framework to the multi-agent case. Each agent uses the same MAXQ hierarchy to decompose a task into sub-tasks. Learning is decentralized, with each agent learning three interrelated skills: how to perform sub-tasks, the order in which to do them, and how to coordinate with other agents. Coordination skills among agents are learned by using joint actions at the highest level(s) of the hierarchy. The Q nodes at the highest level(s) of the hierarchy are configured to represent the joint task-action space among multiple agents. In this approach, each agent only knows what other agents are doing at the level of sub-tasks, and is unaware of lower-level (primitive) actions. This hierarchical approach allows agents to learn coordination faster by sharing information at the level of sub-tasks, rather than attempting to learn coordination from primitive joint state-action values. We apply this hierarchical multi-agent reinforcement learning algorithm to a complex AGV scheduling task and compare its performance and speed with other learning approaches, including flat multi-agent, single agent using MAXQ, selfish multiple agents using MAXQ (where each agent acts independently without communicating with the other agents), as well as several well-known AGV heuristics like "first come, first served", "highest queue first" and "nearest station first". We also compare the tradeoffs in learning speed vs. performance of modeling joint action values at multiple levels in the MAXQ hierarchy.
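The top-level coordination idea, learning Q-values over joint sub-task choices, can be sketched as an ordinary Q-learning update over a joint action space; the names and constants here are illustrative, not the paper's MAXQ implementation:

```python
from collections import defaultdict

def make_joint_q():
    # Q table indexed by (state, joint_action), where joint_action is a
    # tuple of each agent's current sub-task choice, as at the top
    # level(s) of the MAXQ hierarchy.
    return defaultdict(float)

def update_joint_q(q, state, joint_action, reward, next_state,
                   next_actions, alpha=0.1, gamma=0.95):
    """One Q-learning step over the joint sub-task space."""
    best_next = max((q[(next_state, a)] for a in next_actions), default=0.0)
    key = (state, joint_action)
    q[key] += alpha * (reward + gamma * best_next - q[key])
    return q[key]
```

Because actions are sub-tasks rather than primitives, the joint table stays far smaller than a flat joint state-action table, which is the source of the reported speedup.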

Peer-to-Peer (P2P) networks offer reliable and efficient services in different application scenarios. In particular, structured P2P protocols (like Chord [1]) have to handle changes in the overlay topology fast and with as little signaling overhead as possible. This paper analyzes the ability of the Chord protocol to keep the network structure up to date, even in environments with high churn rates, i.e., nodes joining and leaving the network frequently. High churn rates occur, e.g., in mobile environments, where participants have to deal with the limited resources of their mobile devices, such as short battery lifetimes or high communication costs. In this paper, we analyze different design parameters and their influence on the stability of Chord-based network structures. We also present several modifications to the basic Chord stabilization scheme, resulting in a much more stable overlay topology.

Group key management is a significant concern in wireless multicast security. This paper builds on the Scalable and Secure Multicast Group Key Management approach, using the Boolean function simplification method together with the reflected binary code known as the Gray code (the output code type of absolute encoders). The most important problem in secure group communication is group dynamics and key management. A scalable secure group communication model guarantees that whenever a membership change occurs, a new group key is generated and distributed to the group members with reduced computation and communication cost. In this technique, the new group key is exchanged securely using only hash values and XOR operations, which reduces the communication overhead; i.e., the rekeying cost is significantly reduced. The proposed member removal technique uses the reflected binary code (Gray code) and outperforms all other schemes known to us in terms of message complexity. Most significantly, the proposed technique reduces the number of rekeying messages when multiple members leave the session.
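The Gray code at the heart of the member-removal scheme is cheap to compute, and adjacent code words differ in exactly one bit, which is the property the rekeying scheme exploits. A standard sketch (the code itself, not the paper's rekeying protocol):

```python
def to_gray(n):
    """Convert a binary number to its reflected binary (Gray) code."""
    return n ^ (n >> 1)

def from_gray(g):
    """Invert the Gray code back to the ordinary binary value."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n
```

Consecutive member IDs thus map to code words one bit apart, so a membership change flips few key-tree positions.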

A mobile user may voluntarily disconnect itself from the Web server to save its battery life and avoid high communication costs. To allow Web pages to be updated while the mobile user is disconnected from the Web server, updates can be staged in the mobile unit and propagated back to the Web server upon reconnection. We analyze algorithms for supporting disconnected write operations and develop a performance model which helps identify the optimal length of the disconnection period under which the cost of update propagation is minimized. The analysis result is particularly applicable to Web applications which allow wireless mobile users to modify Web contents while on the move. We also show how the result can be applied to real-time Web applications, so that the mobile user can determine the longest disconnection period for which it can still propagate updates to the server before the deadline with minimum communication cost.

The problem of Track-to-Track Fusion (T2TF) is very important for distributed tracking systems. It allows the use of the hierarchical fusion structure, where local tracks are sent to the fusion center (FC) as summaries of local information about the states of the targets, and fused to get the global track estimates. Compared to centralized measurement-to-track fusion (CTF), the T2TF approach has low communication cost and is more suitable for practical implementation. Although widely investigated in the literature, most T2TF algorithms deal with the fusion of homogeneous tracks that share the same target state. However, in general, local trackers may use different motion models for the same target, and have different state spaces. This raises the problem of Heterogeneous Track-to-Track Fusion (HT2TF). In this paper, we propose an algorithm for HT2TF based on the generalized Information Matrix Fusion (GIMF) to handle the fusion of heterogeneous tracks in the presence of possible communication delays. Compared to fusion based on the LMMSE criterion, the proposed algorithm does not require the cross-covariance between the tracks, which greatly simplifies its implementation. Simulation results show that the proposed HT2TF algorithm has good consistency and fusion accuracy.

In a snoopy cache multiprocessor system, each processor has a cache in which it stores blocks of data. Each cache is connected to a bus used to communicate with the other caches and with main memory. Each cache monitors the activity on the bus and in its own processor and decides which blocks of data to keep and which to discard. For several of the proposed architectures for snoopy caching systems, we present new on-line algorithms to be used by the caches to decide which blocks to retain and which to drop in order to minimize communication over the bus. We prove that, for any sequence of operations, our algorithms' communication costs are within a constant factor of the minimum required for that sequence; for some of our algorithms we prove that no on-line algorithm has this property with a smaller constant.

A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis and simulations show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes. Many peer-to-peer systems also offer features such as selection of nearby servers, anonymity, search, authentication, and hierarchical naming. Despite this rich set of features, the core operation in most peer-to-peer systems is efficient location of data items. The contribution of this paper is a scalable protocol for lookup in a dynamic peer-to-peer system with frequent node arrivals and departures.
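Chord's key-to-node mapping can be illustrated with consistent hashing on an identifier ring; this toy version scans the known node identifiers linearly instead of using Chord's O(log N) finger-table routing, and all names are illustrative:

```python
import hashlib

def chord_id(key, m=8):
    """Hash a key onto an m-bit identifier ring (2**m positions)."""
    digest = hashlib.sha1(key.encode()).hexdigest()
    return int(digest, 16) % (2 ** m)

def successor(node_ids, kid):
    """Return the first node clockwise from identifier `kid` on the ring."""
    for nid in sorted(node_ids):
        if nid >= kid:
            return nid
    return min(node_ids)  # wrap around the ring
```

A key is stored at `successor(nodes, chord_id(key))`; when a node joins or leaves, only the keys between it and its neighbors move, which is what makes the scheme churn-friendly.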

Cloud computing is an emerging technology that allows users to utilize on-demand computation, storage, data and services from around the world. However, Cloud service providers charge users for these services. Specifically, to access data from their globally distributed storage edge servers, providers charge users depending on the user's location and the amount of data transferred. When deploying data-intensive applications in a Cloud computing environment, optimizing the cost of transferring data to and from these edge servers is a priority, as data play the dominant role in the application's execution. In this paper, we formulate a non-linear programming model to minimize the data retrieval and execution cost of data-intensive workflows in Clouds. Our model retrieves data from Cloud storage resources such that the amount of data transferred is inversely proportional to the communication cost. We take as an example an 'intrusion detection' application workflow, where the data logs are made available from globally distributed Cloud storage servers. We construct the application as a workflow and experiment with Cloud-based storage and compute resources. We compare the cost of multiple executions of the workflow given by a solution of our non-linear program against that given by Amazon CloudFront's 'nearest' single data source selection. Our results show savings of three-quarters of the total cost using our model.

This study deals with the problem of maintaining large folder hierarchies replicated in a distributed environment. This problem afflicts a number of important applications, such as synchronization of folder hierarchies between peer-to-peer environments, synchronization of data between accounts or devices, content distribution and web caching networks, web site mirroring, storage networks, and large-scale web search and mining. At the core of the problem lies the file synchronization challenge, which poses the question: "Given two versions of a folder hierarchy on different machines, an outdated one and a current one, how can we update the outdated version with minimum communication cost, by exploiting the significant similarity between the versions?" Although a popular open source tool for this problem called RSYNC is being used in hundreds of thousands of servers around the world, only very few attempts have been made to impro...
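The weak rolling checksum underlying rsync-style synchronization lets the receiver test every byte offset of its outdated file cheaply. A simplified Adler-style sketch (illustrative, not RSYNC's exact checksum):

```python
def weak_checksum(block):
    """Adler-style weak checksum of a block of bytes."""
    a = sum(block) % 65521
    b = sum((len(block) - i) * byte for i, byte in enumerate(block)) % 65521
    return (b << 16) | a

def roll(csum, old_byte, new_byte, block_len):
    """Slide the window one byte without rescanning the whole block."""
    a = csum & 0xFFFF
    b = csum >> 16
    a = (a - old_byte + new_byte) % 65521
    b = (b - block_len * old_byte + a) % 65521
    return (b << 16) | a
```

Matching weak checksums are then confirmed with a strong hash; only non-matching blocks need to cross the network, which is where the communication savings come from.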

With increased global connectivity, managers are faced with new technologies and rapid organizational changes. For instance, organizations may adopt emerging technologies such as Instant Messaging in order to increase collaboration at a distance and to decrease communications costs. However, the impact and implications of these technologies for managers and employees often go far beyond the original intent of the technology designers. Consequently, in this study, instant messaging (IM) and its use in organizations were investigated through interviews with employees. Results suggest that critical mass represents an important factor for IM success in the workplace, that IM symbolizes informality, and that IM is perceived to be much less rich than face-to-face communication. Further, results demonstrate that employees use IM not only as a replacement for other communication media but as an additional method for reaching others. With IM, employees engage in polychronic communication, view IM as privacy enhancing, and see its interruptive nature as unfair. The paper concludes by discussing research and practice implications for organizational psychologists.

The 2002 global ICT rankings by the International Telecommunications Union (ITU) ranked Nigeria 27th among 51 African countries and 153rd among 178 countries in the world. It was against this background that the paper investigated the state of ICT in the Nigerian construction industry to highlight the level of ICT penetration, its impact in the industry and the constraints to its adoption. The study identified the factors significantly impacting the level of ICT use, grouping them into those internal to the industry and those external to it. A total of 136 respondents to a questionnaire survey, comprising contractors, consultants and academic researchers, provided empirical data for the analysis. The results showed that some internal factors, i.e., the type of business (whether contracting, consulting or academic), chief executive officers' (CEOs)/senior managers' perception of the benefits of ICT and the years of computer literacy of the CEOs/senior managers, were significantly correlated with the level of ICT use in the industry. However, none of the external factors were significantly correlated with the level of ICT use. The main uses of ICT in the industry are word processing, Internet communications, costing and work scheduling. The top five constraints to the use of ICT are insufficient/irregular power supply, high cost of ICT software and hardware, low job order for firms, fear of virus attacks and high rate of obsolescence of ICT software and hardware. A comparison with the results of similar studies in some industrialised and newly industrialised countries indicated that the proportion of firms using the computer is quite high for a developing country like Nigeria. It also highlighted the large gap in access to electricity and other communications infrastructure between developed and developing countries.

Query processing is an important concern in the field of distributed databases. The main problem is: if a query can be decomposed into subqueries that require operations at geographically separated databases, determine the sequence and the sites for performing this set of operations such that the operating cost (communication cost and processing cost) for processing this query is minimized. The problem is complicated by the fact that query processing not only depends on the operations of the query, but also on the parameter values associated with the query. Distributed query processing is an important factor in the overall performance of a distributed database system.
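The site-selection flavor of the problem can be shown with a toy model that picks the execution site minimizing the data shipped, assuming a uniform per-unit transfer cost; this is a deliberate simplification, since real optimizers also weigh processing cost and the ordering of operations:

```python
def cheapest_site(relation_sizes, transfer_cost):
    """Pick the join site that minimizes total data shipped.

    relation_sizes: dict mapping site -> size of the relation stored there
    transfer_cost:  cost per unit of data moved between any two sites
    Returns (best_site, communication_cost).
    """
    total = sum(relation_sizes.values())
    # Executing at a site avoids shipping that site's own relation.
    best = min(relation_sizes, key=lambda s: total - relation_sizes[s])
    return best, transfer_cost * (total - relation_sizes[best])
```

Unsurprisingly, the site holding the largest relation wins under this model; parameter values attached to the query (selectivities, result sizes) change the picture, which is exactly the complication the abstract notes.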

A fundamental problem that confronts peer-to-peer applications is the efficient location of the node that stores a desired data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis and simulations show that Chord is scalable: Communication cost and the state maintained by each node scale logarithmically with the number of Chord nodes.

Index Coding has received considerable attention recently, motivated in part by applications such as fast video-on-demand and efficient communication in wireless networks, and in part by its connection to Network Coding. The basic setting of Index Coding encodes the side-information relation, the problem input, as an undirected graph, and the fundamental parameter is the broadcast rate β, the average communication

Summary: Query processing is an important concern in the field of distributed databases. The main problem is: if a query can be decomposed into subqueries that require operations at geographically separated databases, determine the sequence and the sites for performing this set of operations such that the operating cost (communication cost and processing cost) for processing this query is minimized.

This work presents Petri nets as an intermediate model for hardware/software codesign. The main reason for using Petri nets is to provide a model that allows for formal qualitative and quantitative analysis in order to perform hardware/software partitioning. Petri nets as an intermediate model allow one to analyze properties of the specification and formally compute performance indices which are used in the partitioning process. This paper highlights methods of computing load balance, mutual exclusion degree and communication cost of a behavioral description in order to perform the initial allocation and the partitioning. This work is also devoted to describing a method for estimating hardware area, and it presents an overview of the general partitioning method considering multiple software components.
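The token-game semantics on which such Petri net analyses rest is small enough to sketch; this is a generic place/transition step with dict-based markings, illustrative rather than the paper's tooling:

```python
def enabled(marking, pre):
    """A transition is enabled when every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Fire a transition: consume tokens from input places, produce on outputs."""
    if not enabled(marking, pre):
        raise ValueError("transition not enabled")
    m = dict(marking)
    for p, n in pre.items():
        m[p] = m[p] - n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m
```

Quantitative indices such as load balance or communication cost are then computed over the reachable markings this step function generates.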

Image segmentation can be performed by recursively splitting the whole image or by merging together a large number of minute regions until a specified condition is satisfied. The split and merge procedure of image segmentation takes an intermediate level in an image description as the starting cutset and thereby achieves a compromise between merging small primitive regions and recursively splitting the whole image in order to reach the desired final cutset. The choice of the initial cutset offers significant savings in computation time. A 2-D array implementation of image segmentation by a directed split and merge procedure has been proposed in this paper. Parallelism is realized on two levels: one within the split and merge operations, where more than one merge (or split) may proceed concurrently, and the second between the split and merge operations, where several splits may be performed in parallel with merges. Both the split and merge operations are based on nearest neighbor communications between the processing elements (PEs) and therefore facilitate low communication costs. The basic arithmetic operations required to perform split and merge are comparison and addition. This allows a simple structure of the PE as well as a hardwired control. A local memory of 512 bytes is sufficient to hold the interim data associated with each PE. A prototype of the PE has been constructed using the 3-µm double-metal CMOS technology. Considering a scaling down to 0.8 µm, it is possible to incorporate 16 PEs on a 2.56 cm² chip. With sufficiently large PE window sizes, image segmentation through the proposed approach can be achieved in linear time.
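The split half of the procedure is essentially a quadtree recursion driven by a homogeneity predicate; a minimal sequential sketch (the merge phase and the PE-array parallelism are omitted, and the max-min threshold is a stand-in for the paper's predicate):

```python
def split(img, x, y, size, thresh, regions):
    """Recursively split a square block until it is homogeneous,
    then record it as (x, y, size).

    img is a 2-D list of gray levels; a block counts as homogeneous
    when max - min <= thresh.
    """
    vals = [img[j][i] for j in range(y, y + size) for i in range(x, x + size)]
    if size == 1 or max(vals) - min(vals) <= thresh:
        regions.append((x, y, size))
        return
    h = size // 2
    for dx, dy in ((0, 0), (h, 0), (0, h), (h, h)):
        split(img, x + dx, y + dy, h, thresh, regions)
```

The four recursive calls per block are independent, which is why the PE array can process splits concurrently with only nearest-neighbor communication.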

The operator-Schmidt decomposition is useful in quantum information theory for quantifying the nonlocality of bipartite unitary operations. We construct a family of unitary operators on ℂⁿ ⊗ ℂⁿ whose operator-Schmidt decompositions are computed using the discrete Fourier transform. As a corollary, we produce unitaries on ℂ³ ⊗ ℂ³ with operator-Schmidt number S for every S in {1, ..., 9}. This corollary was unexpected, since it contradicted reasonable conjectures of Nielsen et al (2003 Phys. Rev. A 67 052301) based on intuition from a striking result in the two-qubit case. By the results of Dür et al (2002 Phys. Rev. Lett. 89 057901), who also considered the two-qubit case, our result implies that there are nine equivalence classes of unitaries on ℂ³ ⊗ ℂ³ which are probabilistically interconvertible by (stochastic) local operations and classical communication. As another corollary, a prescription is produced for constructing maximally entangled unitaries from biunimodular functions. Reversing tack, we state a generalized operator-Schmidt decomposition of the quantum Fourier transform considered as an operator ℂ^{M1} ⊗ ℂ^{M2} → ℂ^{N1} ⊗ ℂ^{N2}, with M1M2 = N1N2. This decomposition shows (by Nielsen's bound) that the communication cost of the QFT remains maximal when a net transfer of qudits is permitted. In an appendix, a canonical procedure is given for removing basis-dependence for results and proofs depending on the 'magic basis' introduced in S. Hill and W. Wootters (1997 Entanglement of a pair of quantum bits, Phys. Rev. Lett. 78 5022-5).
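Numerically, the operator-Schmidt number of a unitary on ℂⁿ ⊗ ℂⁿ can be read off from the singular values of its realignment; a small sketch of this standard technique (not the paper's DFT-based construction), assuming NumPy:

```python
import numpy as np

def operator_schmidt_number(U, n, tol=1e-9):
    """Operator-Schmidt number of a matrix U on C^n (x) C^n.

    Realign U[(i*n + j), (k*n + l)] -> R[(i*n + k), (j*n + l)]; the
    number of nonzero singular values of R equals the number of terms
    in the decomposition U = sum_a s_a A_a (x) B_a.
    """
    R = U.reshape(n, n, n, n).transpose(0, 2, 1, 3).reshape(n * n, n * n)
    s = np.linalg.svd(R, compute_uv=False)
    return int(np.sum(s > tol))
```

For example, the identity on two qubits has Schmidt number 1 (it is I ⊗ I), while the SWAP gate has the maximal value 4.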

Cloud computing environments facilitate applications by providing virtualized resources that can be provisioned dynamically. However, users are charged on a pay-per-use basis. User applications may incur large data retrieval and execution costs when they are scheduled taking into account only the 'execution time'. In addition to optimizing execution time, the cost arising from data transfers between resources as well as execution costs must also be taken into account. In this paper, we present a particle swarm optimization (PSO) based heuristic to schedule applications to cloud resources that takes into account both computation cost and data transmission cost. We experiment with a workflow application by varying its computation and communication costs. We compare the cost savings when using PSO and existing 'Best Resource Selection' (BRS) algorithm. Our results show that PSO can achieve: a) as much as 3 times cost savings as compared to BRS, and b) good distribution of workload onto resources.
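A minimal PSO loop of the kind such heuristics build on looks like this; the inertia and acceleration constants are generic textbook choices, not the paper's settings, and the cost function stands in for the workflow's computation-plus-transfer cost:

```python
import random

def pso(cost, dim, n_particles=20, iters=100, lo=-5.0, hi=5.0):
    """Minimal particle swarm optimizer for a continuous cost function."""
    random.seed(0)  # deterministic for illustration
    pos = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]               # each particle's best position
    gbest = min(pbest, key=cost)[:]           # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
                if cost(pbest[i]) < cost(gbest):
                    gbest = pbest[i][:]
    return gbest
```

For scheduling, each dimension would encode a task-to-resource assignment and the cost function would sum execution and data-transmission charges.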

The design challenge for large-scale multiprocessors is (1) to minimize communication overhead, (2) to allow communication to overlap computation, and (3) to coordinate the two without sacrificing processor cost/performance. We show that existing message passing multiprocessors have unnecessarily high communication costs. Research prototypes of message driven machines demonstrate low communication overhead, but poor processor cost/performance. We introduce a simple communication mechanism, Active Messages, show that it is intrinsic to both architectures, allows cost effective use of the hardware, and offers tremendous flexibility. Implementations on nCUBE/2 and CM-5 are described and evaluated using a split-phase shared-memory extension to C, Split-C. We further show that active messages are sufficient to implement the dynamically scheduled languages for which message driven machines were designed. With this mechanism, latency tolerance becomes a programming/compiling concern. Hardware support for active messages is desirable and we outline a range of enhancements to mainstream processors.

ANTONELLI C. (2000) Collective knowledge communication and innovation: the evidence of technological districts, Reg. Studies 34, 535-547. Technological knowledge is a collective good in that its generation is the result of a process that... more

ANTONELLI C. (2000) Collective knowledge communication and innovation: the evidence of technological districts, Reg. Studies 34, 535-547. Technological knowledge is a collective good in that its generation is the result of a process that combines pieces of information and ...

Bloom filter based algorithms have proven successful as a very efficient technique to reduce communication costs of database joins in a distributed setting. However, the full potential of Bloom filters has not yet been exploited. Especially... more

Bloom filter based algorithms have proven successful as a very efficient technique to reduce the communication costs of database joins in a distributed setting. However, the full potential of Bloom filters has not yet been exploited. Especially in the case of multi-joins, where the data is distributed among several sites, additional optimization opportunities arise, which require new Bloom filter operations and computations. In this paper, we present these extensions and point out how they improve the performance of such distributed joins. While the paper focuses on efficient join computation, the described extensions are applicable to a wide range of usages where Bloom filters are used for compressed set representation.
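
The basic idea, shipping a Bloom filter in place of the join keys themselves, can be sketched as follows (the filter size, hash construction, and site data are illustrative choices, not the paper's extensions):

```python
import hashlib

class BloomFilter:
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = bytearray(m)

    def _hashes(self, item):
        # Derive k positions from salted SHA-256 digests of the item.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for h in self._hashes(item):
            self.bits[h] = 1

    def might_contain(self, item):
        # No false negatives; false positives possible at a small rate.
        return all(self.bits[h] for h in self._hashes(item))

# Site A builds a filter over its join keys and ships only the filter (m bits)
# to site B, which forwards back only potentially matching rows: a semi-join
# whose communication cost is the filter plus the surviving rows.
site_a_keys = {101, 205, 307}
site_b_rows = [(101, "x"), (999, "y"), (307, "z"), (555, "w")]

bf = BloomFilter()
for key in site_a_keys:
    bf.add(key)

shipped = [row for row in site_b_rows if bf.might_contain(row[0])]
```

Every genuinely matching row is guaranteed to be shipped; non-matching rows slip through only with the filter's (here negligible) false-positive probability.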

A fundamental problem that confronts peer-to-peer applications is the efficient location of the node that stores a desired data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides... more

A fundamental problem that confronts peer-to-peer applications is the efficient location of the node that stores a desired data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis and simulations show that Chord is scalable: communication cost and the state maintained by each node scale logarithmically with the number of Chord nodes.
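
The heart of Chord's key-to-node mapping can be sketched in a few lines (a static ring with no finger tables, joins, or failures; the identifier-space size and node names below are illustrative):

```python
import hashlib
from bisect import bisect_left

# Simplified consistent-hashing core of Chord: keys and node IDs share one
# identifier circle, and a key belongs to its successor node on the circle.
M = 2 ** 16  # small identifier space for illustration

def chord_id(name):
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big") % M

nodes = sorted({chord_id(f"node-{i}") for i in range(8)})

def successor(key_id):
    # First node at or clockwise after key_id, wrapping around the circle.
    i = bisect_left(nodes, key_id)
    return nodes[i % len(nodes)]

owner = successor(chord_id("my-data-item"))
```

In the full protocol each node keeps only O(log N) routing state (its finger table) and resolves `successor` in O(log N) messages, which is the logarithmic scaling the abstract refers to.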

Objectives: To determine the economic impact of plague in three provinces of Cajamarca department in 1994 and 1999. Materials and methods: In this observational, cross-sectional and descriptive study, we estimated the direct and indirect... more

Objectives: To determine the economic impact of plague in three provinces of Cajamarca department in 1994 and 1999. Materials and methods: In this observational, cross-sectional and descriptive study, we estimated the direct and indirect costs (institutional, familial, and community costs) of plague in San Miguel, San Pablo, and Contumazá Provinces in Cajamarca Department for 1994 and 1999. Data for cost

Two kinds of tools are necessary to optimise the use of available resources by the execution of parallel programs on distributed memory systems: mapping and load balancing tools. A mapping tool is well suited for programs whose behaviour... more

Two kinds of tools are necessary to optimise the use of available resources by the execution of parallel programs on distributed memory systems: mapping and load balancing tools. A mapping tool is well suited for programs whose behaviour is predictable, while for many "real applications" it needs to be complemented by a dynamic load balancing tool. Both tools are currently being investigated for inclusion in the programming environment designed by the SEPP COPERNICUS project.

In this paper we formally define proof systems for functions and develop an example of such a proof with a constant number of rounds, which we modify (at no extra communication cost) into an identification scheme with secret key exchange... more

In this paper we formally define proof systems for functions and develop an example of such a proof with a constant number of rounds, which we modify (at no extra communication cost) into an identification scheme with secret key exchange for subsequent conventional encryption. Implemented on a standard 32-bit chip or similar, the whole protocol, which involves mutual identification of two users, exchange of a random common secret key, and verification of certificates for the public keys (RSA, 512 bits), takes less than 3/4 of a second.

This paper describes a technique for clustering homogeneously distributed data in a peer-to-peer environment like sensor networks. The proposed technique is based on the principles of the K-Means algorithm. It works in a localized... more

This paper describes a technique for clustering homogeneously distributed data in a peer-to-peer environment like sensor networks. The proposed technique is based on the principles of the K-Means algorithm. It works in a localized asynchronous manner by communicating with the neighboring nodes. The paper offers extensive theoretical analysis of the algorithm that bounds the error in the distributed clustering process compared to the centralized approach that requires downloading all the observed data to a single site. Experimental results show that, in contrast to the case when all the data is transmitted to a central location for application of the conventional clustering algorithm, the communication cost (an important consideration in sensor networks, which are typically equipped with limited battery power) of the proposed approach is significantly smaller. At the same time, the accuracy of the obtained centroids is high and the number of samples which are incorrectly labeled is also small.
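
A minimal sketch of the general idea, not the paper's exact algorithm: each peer runs the K-Means assignment step on its own data and exchanges only per-cluster sums and counts with immediate neighbours, so raw samples never leave a node (the topology, data, and initial centroids below are assumed for illustration):

```python
import random

random.seed(0)

# Four peers, each holding local 1-D samples from two ground-truth clusters
# (around 0 and around 10); peers are arranged in a line topology.
peers = [[random.gauss(0, 1) for _ in range(50)] +
         [random.gauss(10, 1) for _ in range(50)] for _ in range(4)]
neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
centroids = {p: [1.0, 8.0] for p in range(4)}   # same initial guess everywhere

def local_stats(data, cents):
    # K-Means assignment step on local data only: per-cluster (sum, count).
    sums, counts = [0.0] * len(cents), [0] * len(cents)
    for x in data:
        j = min(range(len(cents)), key=lambda c: (x - cents[c]) ** 2)
        sums[j] += x
        counts[j] += 1
    return sums, counts

for _ in range(5):
    stats = {p: local_stats(peers[p], centroids[p]) for p in centroids}
    new = {}
    for p in centroids:
        group = [p] + neighbours[p]   # communication: immediate neighbours only
        new[p] = []
        for j in range(2):
            s = sum(stats[q][0][j] for q in group)
            n = sum(stats[q][1][j] for q in group)
            new[p].append(s / n if n else centroids[p][j])
    centroids = new
```

Only two numbers per cluster cross each link per iteration, instead of all the raw observations, which is where the communication savings over the centralized approach come from.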

Designing a video-on-demand (VoD) system is in essence an optimization task aimed at minimizing the cost of communication and storage in the corresponding network. The decision variables of this problem are the locations of the nodes plus... more

Designing a video-on-demand (VoD) system is in essence an optimization task aimed at minimizing the cost of communication and storage in the corresponding network. The decision variables of this problem are the locations of the nodes plus the content which should be cached in each node. Furthermore, an assignment strategy is needed to determine, for each customer, which node should be contacted for each video file. While this problem is categorized in the general group of network optimization problems, its specific characteristics demand a new solution to be sought for it. In this paper, inspired by the success of fuzzy optimization for similar problems in coding, a fuzzy objective function is derived which is heuristically shown to minimize the communication cost in a VoD network, while controlling the storage cost as well. Then, an iterative algorithm is proposed to find an optimum solution to the proposed problem. After addressing the mathematical details of the proposed method, a sample problem is presented followed by the solution produced for it by the proposed method. This solution is then extensively analyzed.

This paper presents two lifetime models that describe two of the most common modes of operation of sensor nodes today, trigger-driven and duty-cycle driven. The models use a set of hardware parameters such as power consumption per task,... more

This paper presents two lifetime models that describe two of the most common modes of operation of sensor nodes today, trigger-driven and duty-cycle driven. The models use a set of hardware parameters such as power consumption per task, state transition overheads, and communication cost to compute a node’s average lifetime for a given event arrival rate. Through comparison of the two models and a case study from a real camera sensor node design we show how the models can be applied to drive architectural decisions, compute energy budgets and duty-cycles, and to perform side-by-side comparison of different platforms.
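
The flavor of such a model can be shown with a back-of-the-envelope trigger-driven calculation (all parameter values below are invented for illustration and are not from the paper): average power is the sleep draw plus the per-event energy times the event arrival rate, and lifetime is battery energy divided by average power:

```python
# Hypothetical trigger-driven node: energy is spent per event (sense, process,
# transmit) on top of a constant sleep draw.
battery_j = 2 * 3600 * 3.0            # 2 Ah battery at 3 V, in joules
e_sense_j, e_proc_j, e_tx_j = 0.5e-3, 1.2e-3, 2.8e-3   # energy per event
p_sleep_w = 15e-6                     # sleep-mode draw in watts
event_rate_hz = 0.1                   # one event every 10 seconds

# Average power: sleep floor plus event energy amortized over arrivals.
p_avg_w = p_sleep_w + event_rate_hz * (e_sense_j + e_proc_j + e_tx_j)

# Lifetime in days for the given event arrival rate.
lifetime_days = battery_j / p_avg_w / 86400
```

With these numbers the transmit term dominates, which is consistent with communication cost being a first-order input to such models; halving the event rate roughly doubles the lifetime until the sleep draw becomes the floor.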

In this paper, we explore the problem of scheduling parallel programs using task duplication for message-passing multicomputers. Task duplication means scheduling a parallel program by redundantly executing some of the tasks on which other... more

In this paper, we explore the problem of scheduling parallel programs using task duplication for message-passing multicomputers. Task duplication means scheduling a parallel program by redundantly executing some of the tasks on which other tasks of the program critically depend. This can reduce the start times of tasks waiting for messages from tasks residing in other processors. There have been a few scheduling algorithms using task duplication. We discuss two such previously reported algorithms and describe their differences, limitations and suitability for different environments. A new algorithm is proposed which outperforms both of these algorithms, and is more efficient for low as well as high values of communication-to-computation ratios. The algorithm takes into account arbitrary computation and communication costs. All three algorithms are tested by scheduling some of the commonly encountered graph structures.

Query processing is an important concern in the field of distributed databases. The main problem is: if a query can be decomposed into subqueries that require operations at geographically separated databases, determine the sequence and... more

Query processing is an important concern in the field of distributed databases. The main problem is: if a query can be decomposed into subqueries that require operations at geographically separated databases, determine the sequence and the sites for performing this set of operations such that the operating cost (communication cost and processing cost) for processing this query is minimized. The problem is complicated by the fact that query processing not only depends on the operations of the query, but also on the parameter values associated with the query. Distributed query processing is an important factor in the overall performance of a distributed database system.

The Computational Plant or Cplant is a commodity-based supercomputer under development at Sandia National Laboratories. This paper describes resource-allocation strategies to achieve processor locality for parallel jobs in Cplant and... more

The Computational Plant or Cplant is a commodity-based supercomputer under development at Sandia National Laboratories. This paper describes resource-allocation strategies to achieve processor locality for parallel jobs in Cplant and other supercomputers. Users of Cplant and other Sandia supercomputers submit parallel jobs to a job queue. When a job is scheduled to run, it is

We prove lower bounds for the direct sum problem for two-party bounded error randomised multiple-round communication protocols. Our proofs use the notion of information cost of a protocol, as defined by Chakrabarti et al. and refined further... more

We prove lower bounds for the direct sum problem for two-party bounded error randomised multiple-round communication protocols. Our proofs use the notion of information cost of a protocol, as defined by Chakrabarti et al. and refined further in subsequent work. Our main technical result is a 'compression' theorem saying that, for any probability distribution μ over the inputs, a t-round private coin bounded error protocol for a function f with information cost c can be converted into a t-round deterministic protocol for f with bounded distributional error and communication cost O(tc).

The authors extend the concept of coterie into k-coterie for k entries to a critical section. A structure named Cohorts is proposed to construct quorums in a k-coterie. The solution is resilient to node failures and/or network... more

The authors extend the concept of coterie into k-coterie for k entries to a critical section. A structure named Cohorts is proposed to construct quorums in a k-coterie. The solution is resilient to node failures and/or network partitioning and has a low communication cost. The Cohorts structure is further improved to increase the availability of 1-entry critical sections.

Clustering sensor nodes is an effective topology control method for increasing network lifetime and scalability. It also balances the load on the sensor nodes. HEED is a well-known distributed clustering protocol that uses both energy and... more

Clustering sensor nodes is an effective topology control method for increasing network lifetime and scalability. It also balances the load on the sensor nodes. HEED is a well-known distributed clustering protocol that uses both energy and communication cost to elect Cluster Heads (CHs) in a probabilistic way. This paper improves the HEED protocol using fuzzy logic and a non-probabilistic approach for CH election. Fuzzy logic control is capable of making wise decisions by blending different environmental parameters. The protocol uses a node's remaining energy and two fuzzy descriptors, node degree and node centrality, to elect CHs. On the other hand, the probabilistic model of HEED for electing tentative CHs is eliminated by introducing a delay, inversely proportional to the node's residual energy, for each node. Therefore, tentative CHs are selected according to their priority. Simulation results demonstrate that our approach performs better than HEED in terms of network lifetime and also cluster formation.
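
The delay-based tentative-CH step can be sketched directly (the proportionality constant and the energy values below are assumptions for illustration): each node backs off for a delay inversely proportional to its residual energy before announcing itself, so higher-energy nodes announce first and can suppress their lower-energy neighbours:

```python
# Each node waits K / (residual energy) before announcing itself as a
# tentative CH; K and the per-node energies are illustrative assumptions.
K = 1.0

def announce_delay(residual_energy_j):
    return K / residual_energy_j

residual = {"a": 1.8, "b": 0.6, "c": 1.2}   # joules remaining per node
order = sorted(residual, key=lambda n: announce_delay(residual[n]))
```

Because the delay is deterministic rather than probabilistic, the announcement order is exactly the energy priority order, which is the point of replacing HEED's probabilistic tentative-CH step.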

This paper presents TiNA, a scheme for minimizing energy consumption in sensor networks by exploiting end-user tolerance to temporal coherency. TiNA utilizes temporal coherency tolerances to both reduce the amount of information... more

This paper presents TiNA, a scheme for minimizing energy consumption in sensor networks by exploiting end-user tolerance to temporal coherency. TiNA utilizes temporal coherency tolerances to both reduce the amount of information transmitted by individual nodes (communication cost dominates power usage in sensor networks), and to improve quality of data when not all sensor readings can be propagated up the network within a given time constraint. TiNA was evaluated against a traditional in-network aggregation ...
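
The suppression idea can be sketched as follows (the relative-threshold form of the tolerance is an assumption; the paper defines its own tct semantics): a node transmits a reading only when it leaves the tolerance band around the last reported value:

```python
# A node transmits a reading only when it differs from the last value it
# reported by more than the temporal-coherency tolerance (tct, here a
# relative threshold); the parent keeps using the last reported value in
# aggregates until a new one arrives.
def suppress_stream(readings, tct=0.1):
    sent, last = [], None
    for r in readings:
        if last is None or abs(r - last) > tct * abs(last):
            sent.append(r)   # outside the band: spend the radio energy
            last = r
    return sent

sent = suppress_stream([20.0, 20.5, 21.0, 25.0, 25.3, 30.0])
```

Here six readings cost only three transmissions, which is where the energy savings come from, since communication dominates power usage in sensor networks.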

A localized Delaunay triangulation has the following interesting properties in a wireless ad hoc setting: it can be built with localized information, the communication cost imposed by control information is limited and it supports... more

A localized Delaunay triangulation has the following interesting properties in a wireless ad hoc setting: it can be built with localized information, the communication cost imposed by control information is limited, and it supports geographical routing algorithms that offer guaranteed convergence. This paper presents a localized algorithm that builds a graph called the planar localized Delaunay triangulation, PLDel, known to be a good spanner of the unit disk graph, UDG. Unlike previous work, our algorithm builds PLDel in a single communication step, maintaining a communication cost of O(n log n), which is within a constant of the optimum. This represents a significant practical improvement over previous algorithms with similar theoretical bounds. Furthermore, the small cost of our algorithm makes it feasible to use PLDel in real systems, instead of the Gabriel or the Relative Neighborhood graphs, which are not good spanners of UDG.

Loops are a large source of parallelism for many numerical applications. An important issue in the parallel execution of loops is how to schedule them so that the workload is well balanced among the processors. Most existing loop... more

Loops are a large source of parallelism for many numerical applications. An important issue in the parallel execution of loops is how to schedule them so that the workload is well balanced among the processors. Most existing loop scheduling algorithms were designed for shared-memory multiprocessors with uniform memory access costs. These approaches are not suitable for distributed-memory multiprocessors, where data locality is a major concern and communication costs are high. This paper presents a new scheduling algorithm in which data locality is taken into account. Our approach combines both worlds, static and dynamic scheduling, in a two-level (overlapped) fashion. This way, data locality is considered and communication costs are limited. The performance of the new algorithm is evaluated on a CM-5 message-passing distributed-memory multiprocessor.
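
The two-level idea, a static partition for locality plus a small dynamically dispensed remainder for balance, can be sketched as follows (the split ratio and chunk size are illustrative assumptions, not the paper's parameters):

```python
from collections import deque

# Most iterations are assigned statically, so each processor works mostly on
# data it owns locally; a small remainder is handed out dynamically in chunks
# to absorb load imbalance without much communication.
def two_level_schedule(n_iters, n_procs, static_frac=0.75, chunk=8):
    static_part = int(n_iters * static_frac)
    per_proc = static_part // n_procs
    # Level 1 (static): contiguous local blocks, one per processor.
    assigned = {p: list(range(p * per_proc, (p + 1) * per_proc))
                for p in range(n_procs)}
    # Level 2 (dynamic): leftover iterations live in a shared work queue.
    leftover = deque(range(per_proc * n_procs, n_iters))
    p = 0
    while leftover:
        # A processor that finishes early grabs the next chunk (here we just
        # rotate round-robin to stand in for "whoever is idle").
        for _ in range(min(chunk, len(leftover))):
            assigned[p].append(leftover.popleft())
        p = (p + 1) % n_procs
    return assigned

sched = two_level_schedule(100, 4)
```

Raising `static_frac` favours locality (less communication); lowering it favours balance, which is exactly the trade-off the two-level scheme tunes.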

The emerging location-detection devices together with ubiquitous connectivity have enabled a large variety of location-based services (LBS). Unfortunately, LBS may threaten the users' privacy. K-anonymity cloaking the user location to... more

The emerging location-detection devices, together with ubiquitous connectivity, have enabled a large variety of location-based services (LBS). Unfortunately, LBS may threaten users' privacy. K-anonymity, which cloaks the user location to a K-anonymizing spatial region (K-ASR), has been extensively studied to protect privacy in LBS. The traditional K-anonymity method needs complex query processing algorithms at the server side. SpaceTwist [8] rectifies the above shortcoming of traditional K-anonymity since it only requires incremental nearest neighbor (INN) query processing techniques at the server side. However, SpaceTwist may fail since it cannot guarantee K-anonymity. In this paper, our proposed framework, called KAWCR (K-Anonymity Without Cloaked Region), rectifies the shortcomings and retains the advantages of the above two techniques. KAWCR only needs the server to process INN queries and can guarantee that the user issuing the query is indistinguishable from at least K-1 other users. We formulate the communication costs of KAWCR, traditional K-anonymity and SpaceTwist under the assumption that POIs and users are uniformly distributed. We also conducted extensive experiments to compare KAWCR with traditional K-anonymity and SpaceTwist in terms of communication costs. The experimental results show that the communication cost of KAWCR for kNN queries is lower than that of both traditional K-anonymity and SpaceTwist.