Communication Cost Research Papers - Academia.edu
2025, 25th IEEE International Conference on Distributed Computing Systems Workshops
Use of multiple paths between node pairs can enable an overlay network to bypass Internet link failures. Selecting high quality primary and backup paths is challenging, however. To maximize communication reliability, an overlay multipath routing protocol must account for both the failure probability of a single path and link sharing among multiple paths. We propose a practical solution that exploits physical topology information and end-to-end path quality measurement results to select high quality path pairs. Simulation results show the proposed approach is effective in achieving higher multipath reliability in overlay networks at reasonable communication cost.
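The core quantity such a protocol must estimate is the joint failure probability of a candidate path pair when the two overlay paths share physical links. A minimal sketch of that computation follows (not the paper's protocol; the link identifiers and per-link failure probabilities are hypothetical inputs):

```python
# Sketch: score primary/backup pairs by the probability that at least one
# path survives, assuming independent physical-link failures. Both paths
# traverse the shared links, which is why link sharing hurts reliability.
from math import prod
from itertools import combinations

def pair_reliability(path_a, path_b, fail):
    """path_a/path_b: sets of physical link ids; fail: link -> P(failure)."""
    shared = path_a & path_b
    s = prod(1.0 - fail[l] for l in shared)            # shared segment up
    a = prod(1.0 - fail[l] for l in path_a - shared)   # A's private links up
    b = prod(1.0 - fail[l] for l in path_b - shared)   # B's private links up
    return s * (a + b - a * b)   # P(shared up) * P(A's or B's private part up)

def best_pair(paths, fail):
    """Pick the pair of candidate paths with the highest joint reliability."""
    return max(combinations(paths, 2),
               key=lambda pq: pair_reliability(set(pq[0]), set(pq[1]), fail))

primary, backup = best_pair(
    [("l1", "l2"), ("l1", "l3"), ("l4", "l5")],
    {"l1": 0.01, "l2": 0.02, "l3": 0.02, "l4": 0.05, "l5": 0.05})
```

Physical topology information enters through the link sets; end-to-end measurements would supply the per-link failure estimates.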
2025, International Journal of Computer Science and Network Security
In wireless ad hoc environments, two approaches can be used for multicasting: multicast flooding or a multicast tree-based approach. Existing multicast protocols, mainly based on the latter approach, may not work properly in mobile ad hoc networks, as dynamic movement of group members can cause frequent tree reconfiguration with excessive channel overhead and resulting loss of datagrams. Since the task of keeping the tree structure up-to-date in the multicast tree-based approach is nontrivial, multicast flooding is sometimes considered as an alternative approach for multicasting in MANETs. The scheme presented in this research attempts to reduce the forwarding space for multicast packets beyond an earlier presented scheme, and also examines the effect of our improvements on control packet overhead, data packet delivery ratio, and end-to-end delay by further reducing the number of nodes that rebroadcast multicast packets while still maintaining a high degree of accuracy of delivered packets. The simulation was carried out with OMNeT++ to present a comparative analysis of the performance of the angular scheme against the flooding and LAR box schemes. Our results showed an improvement over the flooding and LAR box schemes.
2025, IEEE Communications Magazine
Mobile multimedia applications require networks that optimally allocate resources and adapt to dynamically changing environments. Cross-layer design (CLD) is a new paradigm that addresses this challenge by optimizing communication network architectures across traditional layer boundaries. In this article we discuss the relevant technical challenges of CLD and focus on application-driven CLD for video streaming over wireless networks. We propose a cross-layer optimization strategy that jointly optimizes the application layer, data link layer, and physical layer of the protocol stack using an application-oriented objective function in order to maximize user satisfaction. In our experiments we demonstrate the performance gain achievable with this approach. We also explore the trade-off between performance gain and additional computation and communication cost introduced by cross-layer optimization. Finally, we outline future research challenges in CLD.
2025
We address in this paper the combination of static and dynamic scheduling into an approach called quasi-static scheduling, in the context of real-time systems composed of hard and soft tasks. For the particular problem discussed in this paper, a single static schedule is too pessimistic while a purely dynamic scheduling approach causes a very high on-line overhead. In the proposed quasi-static solution we compute at design-time a set of schedules, and leave for run-time only the selection of a particular schedule based on the actual execution times. We propose an exact algorithm as well as heuristics that tackle the time and memory complexity of the problem. The approach is evaluated through synthetic examples.
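To make the division of labor concrete, here is a minimal sketch of the run-time half of quasi-static scheduling, under the assumption (ours, not the paper's) that the design-time analysis produced one schedule per slack threshold, so that only a table lookup remains online:

```python
# Sketch: design-time output is a set of schedules keyed by how much slack
# the completed tasks have consumed; run time only selects among them.
PRECOMPUTED = [
    # (max elapsed ms at the switch point, schedule to follow afterwards)
    (40,  ["soft1", "hard1", "soft2", "hard2"]),  # enough slack: keep soft tasks
    (80,  ["hard1", "soft2", "hard2"]),           # drop one soft task
    (999, ["hard1", "hard2"]),                    # worst case: hard tasks only
]

def select_schedule(elapsed_ms):
    """The only on-line work: pick the precomputed schedule that matches the
    actual execution time observed so far."""
    for threshold, schedule in PRECOMPUTED:
        if elapsed_ms <= threshold:
            return schedule
    return PRECOMPUTED[-1][1]
```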
2025, 10th IEEE Symposium on Computers and Communications (ISCC'05)
Some of the algorithms developed within the artificial neural-networks tradition can be easily adopted to wireless sensor network platforms and will meet the requirements for sensor networks like: simple parallel distributed computation, distributed storage, data robustness and auto-classification of sensor readings. As a result of the dimensionality reduction obtained simply from the outputs of the neural-networks clustering algorithms, lower communication costs and energy savings can also be obtained. In the paper we will propose three different kinds of architectures for incorporating the ART and FuzzyART artificial neural networks into the small Smart-It units' network. We will also give some results of the classifications of real-world data obtained with a sensor network of 5 Smart-It units, each equipped with 6 different types of sensors. We will also give results from the simulations where we have purposefully made one of the input sensors malfunctioning, giving zero or random signal, in order to show the data robustness of our approach.
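The communication saving comes from transmitting a category index instead of the full sensor vector. A toy illustration of that idea (a nearest-prototype classifier with a vigilance radius standing in for ART; all parameters are made up):

```python
# Sketch: classify a 6-sensor reading on the node and radio only the cluster
# id, so one small integer replaces six floats per transmission.
import math

class TinyClusterer:
    def __init__(self, vigilance=0.25):
        self.protos = []              # cluster prototypes learned so far
        self.vigilance = vigilance
    def classify(self, reading):
        if self.protos:
            i = min(range(len(self.protos)),
                    key=lambda j: math.dist(self.protos[j], reading))
            if math.dist(self.protos[i], reading) <= self.vigilance:
                return i              # reading fits an existing category
        self.protos.append(list(reading))
        return len(self.protos) - 1   # open a new category (ART-style)

node = TinyClusterer()
payload = node.classify([0.7, 0.1, 0.3, 0.9, 0.5, 0.2])  # 6 sensors -> 1 int
```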
2025, Review of Economics and Statistics
We study the price adjustment practices and provide quantitative measurement of the managerial and customer costs of price adjustment using data from a large U.S. industrial manufacturer and its customers. We find that price adjustment costs are a much more complex construct than the existing industrial-organization or macroeconomics literature recognizes. In addition to physical costs (menu costs), we identify and measure three types of managerial costs (information gathering, decision-making, and communication costs) and two types of customer costs (communication and negotiation costs). We find that the managerial costs are more than 6 times, and customer costs are more than 20 times, the menu costs. In total, the price adjustment costs comprise 1.22% of the company's revenue and 20.03% of the company's net margin. We show that many components of the managerial and customer costs are convex, whereas the menu costs are not. We also document the link between price adjustment costs and price rigidity. Finally, we provide evidence of managers' fear of antagonizing customers. I have no answer to the question of how to measure these menu change costs, but these [menu cost] theories will never be taken seriously until an answer is provided. Edward Prescott (1987, p. 113) Given the large number of theoretical papers that evaluate the implications of [price] adjustment costs, obtaining direct evidence that such costs are present seems crucial. Margaret Slade (1998, p. 104)
2025
In this paper, a simple and efficient low-complexity, fast-converging partial update normalized LMS (PNLMS) algorithm is proposed for decision feedback equalization. The proposed implementation is suitable for applications requiring long adaptive equalizers, as is the case in several high-speed wireless communication systems. The proposed algorithm yields good bit error rate performance over a reasonable signal-to-noise ratio range. In each iteration, without reducing the order of the filter, only a part of the filter coefficients is updated, which reduces the computational complexity and improves the speed of operation. The NLMS algorithm can be considered a special case and slightly improved version of the LMS algorithm: it takes into account the variation in the signal level at the filter output and selects a normalized step size parameter, which results in a stable as well as fast-converging adaptive algorithm. The frequency domain representation makes it easier to choose a step size with which the proposed algorithm converges in the mean-squared sense, whereas in the time domain this requires information on the largest eigenvalue of the correlation matrix of the input sequence. Simulation studies show that the proposed realization gives better performance than existing realizations in terms of convergence rate.
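A minimal sketch of one partial-update NLMS iteration, using M-max tap selection as a representative selection rule (the paper's exact rule may differ):

```python
# Sketch: update only the m taps whose current input samples are largest in
# magnitude; the remaining taps keep their old values, cutting per-iteration
# cost while the normalization keeps the step size stable.
import numpy as np

def pnlms_step(w, x, d, mu=0.5, m=8, eps=1e-8):
    """w: equalizer taps, x: input regressor, d: desired symbol."""
    e = d - w @ x                                # a-priori error
    idx = np.argsort(np.abs(x))[-m:]             # taps selected this round
    w[idx] += mu * e * x[idx] / (eps + x @ x)    # normalized partial update
    return w, e

w = np.zeros(32)                                 # long equalizer, few updates
w, e = pnlms_step(w, np.random.randn(32), d=1.0)
```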
2025, Information Security Conference/Information Security Workshop
Cooperative defensive systems communicate and cooperate in their response to worm attacks, but determine the presence of a worm attack solely on local information. Distributed worm detection and immunization systems track suspicious behavior at multiple cooperating nodes to determine whether a worm attack is in progress. Earlier work has shown that cooperative systems can respond quickly to day-zero worms,
2025, Lecture Notes in Computer Science
Two provably secure group identification schemes are presented in this report: 1) we extend De Santis, Crescenzo and Persiano's (SCP) anonymous group identification scheme to the discrete logarithm based case; then, with the help of this basic scheme, we provide a 3-move anonymous group identification scheme which is more efficient than those presented in [SCPM, CDS]; 2) we also extend the original De Santis, Crescenzo and Persiano anonymous group identification scheme to the general case where each user holds a public key chosen by herself independently. The communication cost for one round execution of the protocol is 2mk, where k is the bit length of the public key n and m is the number of users in the group.
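As a worked instance of the stated cost (the numbers are illustrative, not from the report):

```latex
% One round with m = 100 group members and a k = 1024-bit public key n:
\[
  2mk = 2 \times 100 \times 1024 = 204{,}800\ \text{bits} \approx 25\ \text{KB per round}.
\]
```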
2025, Canadian Journal of Regional Science
In this brief paper I outline some of Mario Polèse’s work in regional economics and local development that has marked my career and which has inspired much of my own research, first as his student, then as his colleague. Notwithstanding the importance of his work, I try to point out the other – probably more important – ways in which Mario Polèse has influenced me, and – I suspect – other students and colleagues. This influence comes more from his integrity, fearlessness and humanism than from any specific idea or paper – though these qualities also permeate his writing on regional economies and cities.
2025, 2007 10th International Conference on Information Fusion
A cooperative team's performance strongly depends on the view that the team has of the environment in which it operates. In a team with many autonomous vehicles and many sensors, there is a large volume of information available from which to create that view. However, typically communication bandwidth limitations prevent all sensor readings being shared with all other team members. This paper presents three policies for sharing information in a large team that balance the value of information against communication costs. Analytical and empirical evidence of their effectiveness is provided. The results show that using some easily obtainable probabilistic information about the team dramatically improves overall belief sharing performance. Specifically, by collectively estimating the value of a piece of information, the team can make most efficient use of its communication resources.
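A minimal decision-theoretic sketch of the core trade-off: broadcast a reading only when its estimated value to teammates exceeds the cost of sending it. The surprise-based value model below is an illustrative assumption, not the paper's policy:

```python
# Sketch: value a reading by how much it would move the team's (Bernoulli)
# belief, measured as KL divergence, and share only if that beats the cost.
import math

def value_of_information(p_team, p_mine):
    """KL(p_mine || p_team) for a two-outcome belief; inputs must be in (0, 1)."""
    return (p_mine * math.log(p_mine / p_team)
            + (1 - p_mine) * math.log((1 - p_mine) / (1 - p_team)))

def should_share(p_team, p_mine, comm_cost=0.05):
    # p_team: estimated team belief (the "easily obtainable probabilistic
    # information"); p_mine: my belief after the new sensor reading.
    return value_of_information(p_team, p_mine) > comm_cost
```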
2025, Lecture Notes in Computer Science
The concept of safe region has been used to reduce the computation and communication cost for the continuous monitoring of k nearest neighbor (kNN) queries. A safe region is an area such that as long as a query remains in it, the set of its kNNs does not change. In this paper, we present an efficient technique to construct the safe region by using cheap RangeNN queries. We also extend our approach for dynamic datasets (the objects may appear or disappear from the dataset). Our proposed algorithm outperforms existing algorithms and scales better with the increase in k.
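The monitoring loop that the safe region enables looks roughly like the sketch below; the region construction itself (done in the paper via cheap RangeNN queries) is abstracted behind hypothetical helpers:

```python
# Sketch: while the moving query stays inside its safe region, the cached
# kNN answer is returned with no computation or communication at all.
def monitor(query_positions, k, knn, compute_safe_region):
    answer, region = None, None
    for q in query_positions:                    # successive query locations
        if region is None or not region.contains(q):
            answer = knn(q, k)                   # expensive re-evaluation
            region = compute_safe_region(q, answer)  # valid while kNNs hold
        yield answer                             # free while inside region
```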
2025
In recent years, the application of WSNs has increased remarkably, and notable developments and advances have been achieved in this regard. In particular, thanks to smart, cheaper and smaller nodes, different types of information can be detected and gathered in different environments and under different conditions. As the popularity of WSNs has increased, the problems and issues related to these networks have been examined and investigated. Routing, for instance, is one of the main challenges and has a direct impact on the performance of sensor networks. In WSN routing, sensor nodes send and receive large amounts of information. As a result, such a system may use a great deal of energy, which reduces network lifetime. Given the limited power of a battery, methods and approaches are needed for optimizing power consumption. One such approach is to cluster sensor nodes; however, improper clustering increases the load imposed on the clusters around the sink. Hence, for proper clustering, smart algorithms need to be used. Accordingly, in this paper, a novel algorithm, namely the social spider optimization (SSO) algorithm, is proposed for clustering the sensor network. It is based on the simulation of the social cooperative behavior of spiders. In the proposed algorithm, nodes imitate a group of spiders who interact with each other according to the biological rules of the colony. Furthermore, fuzzy logic based on the two criteria of battery level and distance to the sink is used for determining the fitness of nodes. On the other hand, in WSNs with a fixed sink, the nodes near the sink carry the multi-hop routes and data converging towards the sink, and are therefore more likely to deplete their battery energy than other nodes of the network. A mobile sink is therefore also suggested in this paper for dealing with this problem. To investigate and demonstrate the performance of the proposed method, we compared it with the DCRRP and NODIC protocols. The simulation results indicated better performance of the proposed method in terms of power consumption, throughput rate, end-to-end delay and signal-to-noise ratio, as well as higher fault tolerance, especially with respect to sensor node failures.
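A minimal sketch of a two-input fuzzy fitness in the spirit described (battery level and distance to the sink); the membership functions and the rule blend are illustrative assumptions, not the paper's:

```python
# Sketch: a node is a good cluster-head candidate when it has charge AND is
# close to the sink; blend a strict AND with an average for smoothness.
def fuzzy_fitness(battery, distance, d_max):
    high_battery = max(0.0, min(battery, 1.0))        # battery in [0, 1]
    near_sink = 1.0 - min(distance / d_max, 1.0)      # 1 at sink, 0 at d_max
    strict = min(high_battery, near_sink)             # Mamdani-style AND
    return 0.7 * strict + 0.3 * (high_battery + near_sink) / 2
```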
2025, Computers & Operations Research
This paper presents a redundant multicast routing problem in multilayer networks that arises from large-scale distribution of real-time multicast data (e.g., Internet TV, videocasting, online games, stock quotes). Since these multicast services commonly operate in multilayer networks, the communication paths need to be robust against a single router or link failure as well as multiple such failures due to shared risk link groups (SRLGs). The main challenge of this multicast is to ensure service availability and reliability using a path protection scheme, which is to find a redundant path that is SRLG-disjoint (diverse) from each working path. The objective of this problem is, therefore, to find two redundant multicast trees, each from one of the two redundant sources to every destination, at a minimum total communication cost, while the two paths from the two sources to every destination are guaranteed to be SRLG-diverse (i.e., links in the same risk group are disjoint). In this paper, we present two new mathematical programming models, edge-based and path-based, for the redundant multicast routing problem with SRLG-diverse constraints. Because the number of paths in the path-based model grows exponentially with the network size, it is impossible to enumerate all possible paths in real-life networks. We develop three approaches (probabilistic, non-dominated and nearly non-dominated) to generate potentially good paths that may be included in the path-based model. This study is motivated by emerging applications of internet-protocol TV service, and we evaluate the proposed approaches using real-life network topologies. Our empirical results suggest that both models perform very well, and the nearly non-dominated path approach outperforms all other path generation approaches.
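The feasibility condition at the heart of the formulation is easy to state in code. A minimal sketch of the SRLG-diversity check (the data layout is assumed, not from the paper):

```python
# Sketch: two paths are SRLG-diverse iff no risk group contains a link from
# each path; this is the constraint both models must enforce per destination.
def srlg_diverse(path1, path2, srlg_of):
    """path1/path2: iterables of link ids; srlg_of: link -> set of group ids."""
    groups1 = set().union(*(srlg_of[l] for l in path1))
    return all(srlg_of[l].isdisjoint(groups1) for l in path2)

srlg_of = {"a": {1}, "b": {2}, "c": {1}, "d": {3}}
assert not srlg_diverse(["a", "b"], ["c"])   # links a and c share group 1
assert srlg_diverse(["a", "b"], ["d"])
```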
2025, Applied Mathematics and Computation
An efficient signcryption scheme based on elliptic curves is proposed in this paper. The signcryption scheme combines digital signature and encryption functions. The proposed scheme takes lower computation and communication cost to provide security functions. It not only provides message confidentiality, authentication, integrity, unforgeability, and non-repudiation, but also forward secrecy for message confidentiality and public verification. In the proposed scheme, the judge can verify the sender's signature directly without the sender's private key when a dispute occurs. Our scheme can be applied to mobile communication environments more efficiently because of its low computation and communication cost.
2025
The trend for remote sensing satellite missions has always been towards smaller size, lower cost, more flexibility, and higher computational power. Reconfigurable Computers (RCs) combine the flexibility of traditional microprocessors with the power of Field Programmable Gate ...
2025, Social Science Research Network
This paper develops a model of an economy with clubs where individuals may belong to multiple clubs and where there may be ever increasing returns to club size. Clubs may be large, as large as the total agent set. The main condition required is that sufficient wealth can compensate for memberships in larger and larger clubs. Notions of price taking equilibrium and the core, both with communication costs, are introduced. These notions require that there is a small cost, called a communication cost, of deviating from a given outcome. With some additional standard sorts of assumptions
2025, Eurographics Workshop on Parallel Graphics and Visualization
We present a new parallel multiresolution volume rendering algorithm for visualizing large data sets. Using the wavelet transform, the raw data is first converted into a multiresolution wavelet tree. To eliminate the parent-child data dependency for reconstruction and achieve load-balanced rendering, we design a novel algorithm to partition the tree and distribute the data along a hierarchical space-filling curve with error-guided bucketization. At run time, the wavelet tree is traversed according to the user-specified error tolerance, data blocks of different resolutions are decompressed and rendered to compose the final image in parallel. Experimental results showed that our algorithm can reduce the run-time communication cost to a minimum and ensure a well-balanced workload among processors when visualizing gigabytes of data with arbitrary error tolerances.
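A minimal sketch of the error-guided part of such a traversal (the node fields are assumptions matching the description, not the paper's data structures):

```python
# Sketch: descend the multiresolution wavelet tree only where a node's stored
# reconstruction error exceeds the user's tolerance; coarser nodes suffice
# elsewhere, so fewer blocks are decompressed and rendered.
def collect_blocks(node, tolerance):
    """Return the coarsest set of blocks meeting the error tolerance."""
    if node.error <= tolerance or not node.children:
        return [node]                        # this resolution is good enough
    blocks = []
    for child in node.children:              # refine only where needed
        blocks += collect_blocks(child, tolerance)
    return blocks
```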
2025
Extracting only the visible portion of an isosurface can improve both the computation efficiency and the rendering speed. However, the visibility test overhead can be quite high for large scale data sets. In this paper, we present a view-dependent isosurface extraction algorithm utilizing occlusion query hardware to accelerate visible isosurface extraction. A spherical partition scheme is proposed to traverse the data blocks in a layered front-to-back order. Such traversal order helps our algorithm to identify the visible isosurface blocks more quickly with fewer visibility queries. Our algorithm can compute a more complete isosurface in a smaller amount of time, and thus is suitable for time-critical visualization applications.
2025, Fourth International Workshop on Volume Graphics, 2005.
We present a new parallel multiresolution volume rendering framework for large-scale time-varying data visualization using the wavelet-based time-space partitioning (WTSP) tree. Utilizing the wavelet transform, a large-scale time-varying data set is converted into a space-time multiresolution data hierarchy, and is stored in a time-space partitioning (TSP) tree. To eliminate the parent-child data dependency for reconstruction and achieve load-balanced rendering, we design an algorithm to partition the WTSP tree and distribute the wavelet-compressed data along hierarchical space-filling curves with error-guided bucketization. At run time, the WTSP tree is traversed according to the user-specified time step and tolerances of both spatial and temporal errors. Data blocks of different spatio-temporal resolutions are reconstructed and rendered to compose the final image in parallel. We demonstrate that our algorithm can reduce the run-time communication cost to a minimum and ensure a well-balanced workload among processors when visualizing gigabytes of time-varying data on a PC cluster.
2025, Concepts, Methodologies, Tools, and Applications
This chapter shows how flexibility can be realized for distributed workflows. The capability to dynamically adapt workflow instances during runtime (e.g., to add, delete or move activities) constitutes a fundamental challenge for any workflow management system (WfMS). While there has been significant research on ad-hoc workflow changes and on related correctness issues, there exists only little work on how to provide such runtime flexibility in an enterprise-wide context as well. Here, scalability in the presence of high loads constitutes an essential requirement, often necessitating distributed (i.e., piecewise) control of a workflow instance by different workflow servers, which should be as independent from each other as possible. This chapter presents advanced concepts and techniques for enabling ad-hoc workflow changes in a distributed WfMS as well. Our focus is on minimizing the communication costs among workflow servers, while ensuring a correct execution behavior as well.
2025
Assessing the performance of peer-to-peer algorithms is impossible without simulations since testing new algorithms by deploying them in an existing P2P network is prohibitively expensive. However, some P2P algorithms are sensitive to the network and traffic models that are used in the simulations. In order to produce realistic results, we therefore require simulations that resemble real-world P2P networks as closely as possible. We describe the Query-Cycle Simulator, a simulator for file-sharing P2P networks. We link the Query-Cycle Simulator to measurements on existing P2P networks and discuss some open issues in simulating these networks.
2025
Efficient utilization of computational resources is an urgent demand, especially when real-time constraints must be guaranteed. Moreover, an acceptable level of reliability should be provided due to the critical nature of some real-time applications. This paper proposes a new approach for processing power reservations that efficiently utilizes all the available processing power to improve reliability and schedulability of independent real-time tasks on a uniprocessor. The basic idea of the proposed approach is to use all of the available processing power in the time interval between the arrivals of two successive tasks or when an existing task departs. The advantages of this mechanism are: 1) it reduces the execution time required for each task and hence increases its reliability; 2) at the arrival of a new task, the processing power requirements for the executing tasks to meet their deadlines are smaller, which gives newly arriving tasks a higher chance of being accommodated alongside the existing ones; 3) efficient processing power utilization may reduce the power consumption in processors with dynamic voltage scaling. An illustrative example and simulation experiments show that our approach provides a more reliable scheduling scheme with a higher acceptance rate compared to the traditional approach based on the Rialto operating system.
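A minimal sketch of the redistribution idea (the interface and fields are ours, not the paper's): between arrivals and departures, all residual capacity is handed out in proportion to each task's reserved share, so tasks finish earlier and gain reliability:

```python
# Sketch: scale every admitted task's reserved CPU share up so the whole
# processor is used until the task set changes again (arrival or departure).
def redistribute(tasks):
    """tasks: list of dicts with 'id' and 'min_share' in (0, 1]."""
    required = sum(t["min_share"] for t in tasks)
    assert required <= 1.0, "admission control should have rejected this set"
    scale = 1.0 / required            # idle capacity is given away too
    return {t["id"]: t["min_share"] * scale for t in tasks}

shares = redistribute([{"id": "t1", "min_share": 0.3},
                       {"id": "t2", "min_share": 0.2}])
# -> {'t1': 0.6, 't2': 0.4}: both tasks run faster than their reservations
```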
2025
The paper presents a heuristic algorithm to schedule real-time indivisible loads, represented as directed sequential task graphs, on a computing cluster. One of the cluster nodes has some special resources (denoted the special node) that may be needed by one of the indivisible load's tasks (denoted the special task). Most previous scheduling algorithms assign the indivisible load as a single unit to one of the cluster nodes. Using this scheduling strategy may overload the special node, so that the system cannot accommodate arriving loads although other nodes are unloaded. The proposed scheduler explores the task graph of the indivisible load and assigns the special task to the special node if there is enough capacity to accommodate it. The other tasks are assigned to the other processing nodes subject to several predefined criteria: satisfying the real-time requirements and minimizing both the communication cost and the context-switching overhead.
2025, cs.ucf.edu
BitTorrent has been shown to be efficient for bulk file transfer; however, it is susceptible to free riding by strategic clients like BitTyrant. Strategic peers configure the client software such that for very little or no contribution, they can obtain good download speeds. Such ...
2024, Journal of Cryptology
A Distributed Key Generation (DKG) protocol is an essential component of threshold cryptosystems required to initialize the cryptosystem securely and generate its private and public keys. In the case of discrete-log-based (dlog-based) threshold signature schemes (ElGamal and its derivatives), the DKG protocol is further used in the distributed signature generation phase to generate one-time signature randomizers (r = g^k). In this paper we show that a widely used dlog-based DKG protocol suggested by Pedersen does not guarantee a uniformly random distribution of generated keys: we describe an efficient active attacker controlling a small number of parties which successfully biases the values of the generated keys away from uniform. We then present a new DKG protocol for the setting of dlog-based cryptosystems which we prove to satisfy the security requirements from DKG protocols and, in particular, it ensures a uniform distribution of the generated keys. The new protocol can be used as a secure replacement for the many applications of Pedersen's protocol. Motivated by the fact that the new DKG protocol incurs additional communication cost relative to Pedersen's original protocol, we investigate whether the latter can be used in specific applications which require relaxed security properties from the DKG
2024, Information Sciences
In this paper, we consider several mathematical and algorithmic problems which arise naturally in the optimal deployment of modern network management systems. Specifically, we will consider the problem of minimizing the total communication costs within an architecture consisting of a distributed hierarchy of cooperating intelligent agents. We consider several communication cost models, and describe provably optimal schemes for distributing agents among machines in each of these models.
2024, Engineering Applications of Artificial Intelligence
Since optical WDM networks are becoming one of the alternatives for building up backbones, dynamic routing and wavelength assignment with delay constraints (DRWA-DC) in WDM networks with sparse wavelength conversion is important for a communication model that routes requests subject to delay bounds. Since the NP-hard minimum Steiner tree problem can be reduced to the DRWA-DC problem, it is very unlikely that optimal solutions can be derived in a reasonable time for the DRWA-DC problem. In this paper, we instead apply a meta-heuristic based upon the ant colony optimization (ACO) approach to produce approximate solutions in a timely manner. In the literature, the ACO approach has been successfully applied to several well-known combinatorial optimization problems whose solutions might be in the form of paths on the associated graphs. The ACO algorithm proposed in this paper incorporates several new features so as to select wavelength links for which the communication cost and the transmission delay of routing the request can be minimized as much as possible subject to the specified delay bound. Computational experiments are designed and conducted to study the performance of the proposed algorithm. Compared with the optimal solutions found by an ILP formulation, numerical results show that the ACO algorithm is effective and robust in providing quality approximate solutions to the DRWA-DC problem.
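The ACO ingredient underneath is the probabilistic link choice. A minimal sketch follows (the heuristic term combining cost and delay is an illustrative assumption, not the paper's exact rule):

```python
# Sketch: an ant extends its route by sampling the next wavelength link with
# probability proportional to pheromone^alpha * heuristic^beta, where the
# heuristic favors links with low combined cost and delay.
import random

def choose_link(candidates, pheromone, alpha=1.0, beta=2.0):
    """candidates: list of (link, cost, delay); pheromone: link -> level."""
    weights = [(pheromone[l] ** alpha) * ((1.0 / (cost + delay)) ** beta)
               for l, cost, delay in candidates]
    links = [l for l, _, _ in candidates]
    return random.choices(links, weights=weights)[0]
```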
2024, Yale Journal on Regulation
The importance of organizational form in American medicine has been the subject of much debate. But the character of the debate-the nonprofit form versus its competitors-has been sufficiently confused that much of the controversy should be reconsidered. That debate has been both ideological (commercialism and profit versus service and professionalism) and practical (which form is more efficient?). The challenge of public policy is to adapt public rules to the central realities of American medicine, not the shibboleths of shrill discourse. In the case of medicine, factors other than the form of legal ownership-among them, the nature of the service provided, the developmental stage of the service, the role of physicians in providing the service, and the nature of government regulation-are more important in fashioning those appropriate responses. The radical transformation that has been occurring in American medicine is not substantially explained by changes in organizational form. What we are witnessing is a shift in the character of American medicine-a rise of commercialism and a decline of a professional ethos-that cuts across organizational forms. Ownership-based policies alone cannot reverse this trend. A restoration of the fundamental values of caring, historically associated with charitable nonprofit institutions but endangered by growth and depersonalization of the medical industry, is
2024, ArXiv
Privacy has been a major motivation for distributed problem optimization. However, even though several methods have been proposed to evaluate it, none of them is widely used. The Distributed Constraint Optimization Problem (DCOP) is a fundamental model used to approach various families of distributed problems. As privacy loss does not occur when a solution is accepted, but when it is proposed, privacy requirements cannot be interpreted as a criterion of the objective function of the DCOP. Here we approach the problem by letting both the optimized costs found in DCOPs and the privacy requirements guide the agents' exploration of the search space. We introduce the Utilitarian Distributed Constraint Optimization Problem (UDCOP), where the costs and the privacy requirements are used as parameters to a heuristic modifying the search process. Common stochastic algorithms for decentralized constraint optimization problems are evaluated here according to how well they preserve privacy. Furthe...
2024
Advanced array processing techniques are becoming an indispensable requirement for integrating the rapid developments in wireless high-density electronic interfaces to the Central Nervous System (CNS) with computational neuroscience. This work aims at describing a systems approach for data compression to enable real-time transmission of high volumes of neural data acquired by implantable microelectrode arrays to extra-cutaneous devices. We show that the tradeoff between transmission bit rate and processing complexity requires a smart coding mechanism to yield a fast and efficient neural interface capable of transmitting the information from the CNS in real time without compromising communication bandwidth and signal fidelity. The results presented demonstrate that on-chip coding offers tremendous savings in communication costs compared to raw data transmission for off-chip analysis. Performance illustrations and experimental neural data examples are described in detail.
2024
We study the problem of applying adaptive filters for approximate query processing in a distributed stream environment. We propose filter bound assignment protocols with the objective of reducing communication cost. Most previous works focus on value-based queries (e.g., average) with numerical error tolerance. In this paper, we cover entity-based queries (e.g., a nearest neighbor query returns object names rather than a single value). In particular, we study non-value-based tolerance (e.g., the answer to the nearest-neighbor query should rank third or above). We investigate different non-value-based error tolerance definitions and discuss how they are applied to two classes of entity-based queries: non-rankbased and rank-based queries. Extensive experiments show that our protocols achieve significant savings in both communication overhead and server computation.
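The underlying filter mechanism that the protocols assign bounds for can be sketched as follows (the class shape is ours; the paper's contribution is how the bounds are set, including for entity-based, non-value-based tolerances):

```python
# Sketch: a remote source holds a filter bound and transmits only when its
# value escapes the bound, so most updates cost no communication at all.
class SourceFilter:
    def __init__(self, value, width):
        self.width = width
        self.lo, self.hi = value - width / 2, value + width / 2
    def on_update(self, value, send):
        if not (self.lo <= value <= self.hi):        # bound violated
            send(value)                              # one message to server
            self.lo = value - self.width / 2         # recenter the bound
            self.hi = value + self.width / 2
```

Widening a bound trades answer precision for fewer messages; the protocols in the paper decide how much width each source gets so that the query's overall tolerance still holds.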
2024, Very Large Data Bases
We study the problem of applying adaptive filters for approximate query processing in a distributed stream environment. We propose filter bound assignment protocols with the objective of reducing communication cost. Most previous works focus on value-based queries (e.g., average) with numerical error tolerance. In this paper, we cover entity-based queries (e.g., nearest neighbor) with non-value-based error tolerance. We investigate different nonvalue-based error tolerance definitions and discuss how they are applied to two classes of entity-based queries: non-rank-based and rank-based queries. Extensive experiments show that our protocols achieve significant savings in both communication overhead and server computation.
2024, ACR North American Advances
Retailers frequently use sales promotion tools as a part of their marketing effort. The market for consumer-oriented activities, such as sweepstakes, discount coupons and free samples, has grown substantially in the last decade, reaching approximately $300 billion in the United States alone (Promo Trends Report, 2004). Given this huge volume, it is essential for companies and policy-makers to understand how individuals respond to different types of reward mechanisms. To our knowledge, experimental studies focusing on consumer response to promotional activities are quite scarce (see Ward and Hill, 1991; Prendergast et al., 2005; Chandran and Morwitz, 2006). The current study investigates the impact of time preferences on attitudes towards sales promotion tools. Our conjecture is that delayed promotions do not always induce the same kind of purchase behavior and that risk attitudes, time preferences and affective responses contribute to differences in consumer reactions. Sales promotion activities can take various forms with respect to the reward structure. A possible taxonomy is as follows:
2024, Extending Database Technology
A major concern when processing queries within a wireless sensor network is to minimize the energy consumption of the network nodes, thus extending the network's lifetime. One way to achieve this is by minimizing the amount of communication required to answer queries. In this paper we investigate exact continuous quantile queries, focusing on the particular case of the median query. Many recently proposed algorithms determine a quantile by performing a series of refining histogram queries. For that class of queries, we recently proposed a cost model to estimate the optimal number of histogram buckets within an algorithm for minimizing the energy consumption of a query. In this paper, we extend that algorithm for continuous queries. Furthermore, we also offer a new refinement-based algorithm that employs a heuristic to minimize the number of message transmissions. Our experiments, using synthetic and real datasets, show that despite its theoretical runtime complexity our heuristic solution is able to perform significantly better than histogram-based approaches.
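For intuition, a minimal sketch of the refining-histogram idea for the median (the bucket count is fixed here; choosing it optimally is what the cost model above does, and count_in_range stands in for one in-network aggregation round):

```python
# Sketch: repeatedly histogram the candidate interval, keep the bucket that
# contains the median rank, and recurse until the interval is tiny. Each
# while-iteration corresponds to one round of network communication.
def histogram_median(count_in_range, n_total, lo, hi, buckets=8, eps=1e-3):
    target = (n_total + 1) // 2                 # rank of the median
    while hi - lo > eps:
        edges = [lo + (hi - lo) * i / buckets for i in range(buckets + 1)]
        seen = 0
        for a, b in zip(edges, edges[1:]):      # one histogram query
            c = count_in_range(a, b)            # values in [a, b)
            if seen + c >= target:
                lo, hi, target = a, b, target - seen
                break
            seen += c
    return (lo + hi) / 2

data = [3, 1, 4, 1, 5, 9, 2, 6]
count = lambda a, b: sum(a <= x < b for x in data)
assert round(histogram_median(count, len(data), 1, 10)) == 3
```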
2024, International Journal on Software Tools for Technology Transfer
SAT-based Bounded Model Checking (BMC), though a robust and scalable verification approach, is still computationally intensive, requiring large memory and time. Interestingly, with the recent development of improved SAT solvers, it is frequently the memory limitation of a single server rather than time that becomes a bottleneck for doing deeper BMC search. Distributing the computing requirements of BMC over a network of workstations can overcome the memory limitation of a single server, albeit at increased communication cost. In this paper, we present: a) a method for distributed-SAT over a network of workstations using a Master/Client model where each client workstation has an exclusive partition of the SAT problem and uses knowledge of the partition topology to communicate with other clients; b) a method for distributing SAT-based BMC using the distributed-SAT. For the sake of scalability, at no point in the BMC computation does a single workstation have all the information. We experimented on a network of heterogeneous workstations interconnected with a standard Ethernet LAN. To illustrate, on an industrial design with ~13K FFs and ~0.5M gates, the non-distributed BMC on a single workstation (with 4 GB memory) ran out of memory after reaching a depth of 120; on the other hand, our SAT-based distributed BMC over 5 similar workstations was able to go up to 323 steps with a communication overhead of only 30%.
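A highly simplified sketch of the client side of such a scheme: each client runs unit propagation on its own clause partition, and only implications on interface variables (those shared with other partitions) cross the network, which is where the 30% overhead above comes from. Literals are DIMACS-style integers; the helper names are ours:

```python
# Sketch: one local Boolean-constraint-propagation pass over this client's
# clause partition. Only implications on interface variables need to be sent
# to other clients; a conflict is reported to the master as None.
def propagate(partition, assignment, interface_vars):
    """partition: clauses as lists of ints (DIMACS literals);
    assignment: var -> bool, updated in place."""
    outgoing, changed = {}, True
    while changed:
        changed = False
        for clause in partition:
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue                          # clause already satisfied
            free = [l for l in clause if abs(l) not in assignment]
            if not free:
                return None                       # conflict: tell the master
            if len(free) == 1:                    # unit clause forces a value
                lit = free[0]
                assignment[abs(lit)] = lit > 0
                changed = True
                if abs(lit) in interface_vars:
                    outgoing[abs(lit)] = lit > 0  # must cross the network
    return outgoing
```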