Hiroshi Matsuo - Academia.edu
Papers by Hiroshi Matsuo
In this paper, we propose a Quantified Distributed Constraint Optimization problem (QDCOP) that extends the framework of Distributed Constraint Optimization problems (DCOPs). DCOPs have been studied as a fundamental model of multi-agent cooperation. In traditional DCOPs, all agents cooperate to optimize the sum of their cost functions. However, in practical systems some agents may desire to select the values of their variables without cooperation. In special cases, such agents may take the values with the worst impact on the quality of the result reachable by the optimization process. We apply existential/universal quantifiers to distinguish such uncooperative variables. A universally quantified variable is left unassigned by the optimization, as the result has to hold when it takes any value from its domain, while an existentially quantified variable takes exactly one of its values for each context. Similar classes of problems have recently been studied as (Distributed) Quantified Constraint Problems, where the variables of the CSP have quantifiers. All constraints should be satisfied independently of the values taken by universal variables. We propose a QDCOP that applies the concept of game-tree search to DCOP. If the original problem is a minimization problem, agents that own universally quantified variables may intend to maximize the cost value in the worst case, while the other agents intend to minimize it as usual. Therefore, only bounds, especially upper bounds, of the optimal value are guaranteed. The purpose of the new class of problems is to compute such bounds, as well as sub-optimal solutions. For the QDCOP, we also propose several solution methods based on min-max/alpha-beta and ADOPT algorithms.
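The quantified min/max semantics described above can be made concrete with a small centralized sketch: existentially quantified variables take the minimizing value, universally quantified variables take the worst-case (maximizing) value, so the returned cost is an upper bound on the cooperative optimum. This is only a toy illustration in Python; the variable names, domains, and cost function are hypothetical, and it is not the distributed min-max/alpha-beta or ADOPT-based methods proposed in the paper.

```python
# Centralized toy illustration of the quantified min/max semantics of a QDCOP.
# This is NOT the distributed ADOPT-based algorithm from the paper; variable
# names, domains, and the cost function below are hypothetical.

def qdcop_value(quantified_vars, domains, cost, assignment=None):
    """Evaluate a quantified sequence of variables.

    quantified_vars: list of (name, 'exists' | 'forall') in quantifier order.
    domains: dict name -> iterable of values.
    cost: function taking a complete assignment dict and returning a cost.
    Existential variables minimize the cost; universal variables maximize it
    (worst case), so the returned value is an upper bound on the cooperative
    optimum.
    """
    assignment = dict(assignment or {})
    if not quantified_vars:
        return cost(assignment)
    (name, quant), rest = quantified_vars[0], quantified_vars[1:]
    values = [qdcop_value(rest, domains, cost, {**assignment, name: v})
              for v in domains[name]]
    return min(values) if quant == 'exists' else max(values)

# Hypothetical 3-variable example: x1 and x2 cooperate, u is uncooperative.
domains = {'x1': [0, 1], 'u': [0, 1], 'x2': [0, 1]}
cost = lambda a: abs(a['x1'] - a['u']) + abs(a['u'] - a['x2'])
print(qdcop_value([('x1', 'exists'), ('u', 'forall'), ('x2', 'exists')],
                  domains, cost))   # worst-case bound over u
```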
WSEAS TRANSACTIONS on COMMUNICATIONS archive, Feb 1, 2008
Peer-to-peer (P2P) file sharing has become increasingly popular, accounting for as much as 70% of Internet traffic by some estimates. Recently, we have been witnessing the emergence of a new class of popular P2P applications, namely P2P audio and video streaming. While traditional P2P file distribution applications target elastic data transfers, P2P streaming focuses on the efficient delivery of audio and video content under tight timing requirements. In these applications, each node independently selects some other nodes as its neighbors and exchanges streaming data with them. In this paper, we propose and investigate a fully distributed, scalable, and cooperative protocol for live video streaming in an overlay peer-to-peer network. Our protocol, termed P2P Unstructured Live Media Streaming (PALMS), combines push-pull scheduling with a score-based incentive method to achieve high performance (in terms of delay, stream continuity, cooperation, etc.). The main contribution of PALMS is that it reduces the end-to-end streaming delay and, in turn, results in better delivered quality. Furthermore, with the implementation of the score-based incentive mechanism, PALMS is resilient to the existence of free-riders and encourages cooperation among participating nodes. We have extensively evaluated the performance of PALMS. Our experiments demonstrate that PALMS achieves good streaming quality even in the presence of free-riders.
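As a rough illustration of how a score-based incentive can bias chunk requests toward cooperative neighbors, the sketch below keeps a per-neighbor score and selects pull targets with probability proportional to it. The scoring formula, weights, and class names are assumptions for illustration only, not the actual PALMS mechanism.

```python
# Hypothetical sketch of score-based neighbor selection for chunk requests.
# The score update rule and weights below are assumptions for illustration,
# not the actual PALMS incentive formula.
import random

class Peer:
    def __init__(self, pid):
        self.pid = pid
        self.uploaded_to_us = 0      # chunks this neighbor pushed to us
        self.requests_served = 0     # successful pull replies

    def score(self):
        # Cooperative peers accumulate a higher score; free-riders stay near 1.
        return 1.0 + self.uploaded_to_us + 2.0 * self.requests_served

def choose_neighbor_for_pull(neighbors):
    """Pick a neighbor with probability proportional to its score."""
    total = sum(p.score() for p in neighbors)
    r = random.uniform(0, total)
    acc = 0.0
    for p in neighbors:
        acc += p.score()
        if r <= acc:
            return p
    return neighbors[-1]

neighbors = [Peer(i) for i in range(4)]
neighbors[0].requests_served = 5          # a cooperative neighbor
print(choose_neighbor_for_pull(neighbors).pid)
```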
Applied Sciences, 2018
In this paper, we propose acceleration methods for edge-preserving filtering. The filters natively produce denormalized numbers, which are defined in IEEE Standard 754. Processing denormalized numbers has a higher computational cost than processing normal numbers; thus, the computational performance of edge-preserving filtering is severely diminished. We propose approaches that prevent the occurrence of denormalized numbers for acceleration. Moreover, we verify an effective vectorization of edge-preserving filtering based on changes in the microarchitectures of central processing units by carefully treating kernel weights. The experimental results show that the proposed methods are up to five times faster than straightforward implementations of bilateral filtering and non-local means filtering, while the filters maintain high accuracy. In addition, we show effective vectorization for each central processing unit microarchitecture. The implementation of the bilateral filte...
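To make the denormalized-number issue concrete, the sketch below shows one common remedy under stated assumptions: range weights of a bilateral-style filter decay exponentially, so large intensity differences can underflow into subnormal floats; clamping weights below the smallest normal float32 to zero avoids subnormal arithmetic. The threshold and strategy are illustrative assumptions, not necessarily the paper's exact method.

```python
# Illustrative sketch: bilateral-filter range weights decay as exp(-d^2 / 2s^2),
# so large intensity differences can produce denormalized (subnormal) floats.
# One simple remedy, shown here, is clamping weights below a threshold to zero.
# The threshold and this exact strategy are assumptions, not the paper's method.
import numpy as np

FLT_MIN = np.finfo(np.float32).tiny    # smallest normal float32

def range_weights(diff, sigma_r):
    w = np.exp(-(diff.astype(np.float32) ** 2) / (2.0 * sigma_r ** 2))
    w[w < FLT_MIN] = 0.0               # flush would-be subnormals to zero
    return w

diff = np.linspace(0, 255, 256)
w = range_weights(diff, sigma_r=5.0)
print(np.count_nonzero(w), "non-zero weights out of", w.size)
```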
Applied Sciences, 2018
This study examines vectorized programming for finite impulse response image filtering. Finite impulse response image filtering occupies a fundamental place in image processing and has several approximate acceleration algorithms. However, no sophisticated acceleration method exists for parameter-adaptive filters or other complex filters. For such cases, simple subsampling with code optimization is the only remaining solution. Although Moore's law continues, increases in central processing unit frequency have stopped. Moreover, using more and more transistors is becoming insuperably complex due to power and thermal constraints. Most central processing units have multi-core architectures, complicated cache memories, and short-vector processing units. This change has complicated vectorized programming. Therefore, we first organize the vectorization patterns of vectorized programming to highlight the computing performance of central processing units by revisiting the general finite impul...
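A minimal sketch of what vectorization buys for FIR filtering: the scalar version accumulates one sample at a time, while the vectorized version applies each kernel tap to a whole shifted slice at once. NumPy array operations stand in here for the CPU SIMD intrinsics examined in the paper; the signal and kernel are arbitrary.

```python
# Toy illustration of scalar vs. vectorized FIR (convolution) filtering.
# NumPy vectorization stands in for the CPU SIMD intrinsics studied in the
# paper; the kernel and signal below are arbitrary.
import numpy as np

def fir_scalar(signal, kernel):
    r = len(kernel) // 2
    out = np.zeros_like(signal)
    for i in range(r, len(signal) - r):
        acc = 0.0
        for k in range(len(kernel)):
            acc += kernel[k] * signal[i + k - r]
        out[i] = acc
    return out

def fir_vectorized(signal, kernel):
    r = len(kernel) // 2
    out = np.zeros_like(signal)
    # Each tap contributes a shifted, scaled copy of the signal: one
    # multiply-add over a whole vector per tap instead of per sample.
    for k, w in enumerate(kernel):
        out[r:-r] += w * signal[k:len(signal) - 2 * r + k]
    return out

sig = np.random.rand(10_000).astype(np.float32)
ker = np.array([0.25, 0.5, 0.25], dtype=np.float32)
assert np.allclose(fir_scalar(sig, ker), fir_vectorized(sig, ker), atol=1e-6)
```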
It is possible to divide image sequences into states, where each state has the same motion phase. By applying such an operation before recognition processing, the accuracy of recognition can be improved. A new image-sequence filtering method using an HMM is proposed in this paper, which can divide image sequences into multiple states. The performance of the filtering is improved by re-learning the observation symbol probabilities. In addition, the effectiveness of the proposed method is shown through human identification using image sequences.
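A minimal sketch of the decoding step such a method relies on: Viterbi decoding assigns each frame of an observation-symbol sequence to its most likely HMM state, i.e., motion phase. The transition and emission matrices below are made up, and the re-learning of observation symbol probabilities mentioned above is not shown.

```python
# Minimal Viterbi sketch: split an observation-symbol sequence into HMM states,
# as in dividing an image sequence into motion phases. The transition and
# emission matrices here are made up; the paper's re-learning of observation
# symbol probabilities is not shown.
import numpy as np

def viterbi(obs, pi, A, B):
    """obs: symbol indices; pi: initial probs; A: transitions; B: emissions."""
    n_states, T = A.shape[0], len(obs)
    logd = np.full((T, n_states), -np.inf)
    back = np.zeros((T, n_states), dtype=int)
    logd[0] = np.log(pi) + np.log(B[:, obs[0]])
    for t in range(1, T):
        for j in range(n_states):
            scores = logd[t - 1] + np.log(A[:, j])
            back[t, j] = np.argmax(scores)
            logd[t, j] = scores[back[t, j]] + np.log(B[j, obs[t]])
    path = [int(np.argmax(logd[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]   # most likely state (motion phase) per frame

A = np.array([[0.9, 0.1], [0.1, 0.9]])          # 2 motion phases
B = np.array([[0.8, 0.2], [0.3, 0.7]])          # 2 observation symbols
print(viterbi([0, 0, 1, 1, 1, 0], pi=np.array([0.5, 0.5]), A=A, B=B))
```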
In recent years, the Peer-to-Peer (P2P) approach for media streaming has been studied extensively. In comparison with on-demand media streaming, P2P live media streaming faces much more stringent time constraints. In order to improve performance metrics such as startup delay, source-to-end delay, and playback continuity, we present PALMS, a P2P approach for live media streaming in which each node employs gossip-based pull and push protocols to receive and forward media data among connected nodes. We present a simple heuristic mechanism for the pull protocol in the selection of media segments and peers. Besides the pull method, a push method is deployed to increase the streaming quality. We also adopt a randomized push protocol in order to increase the probability of media data being delivered to connected nodes. Since the presence of free-riders could degrade the delivered streaming quality, PALMS adopts a simple tit-for-tat incentive mechanism to discourage free-riders. We conducted simulations and performance comparisons for PALMS. Experimental results demonstrate that PALMS delivers better streaming quality and more resilience to the heterogeneity of network bandwidths than some existing protocols.
Journal of Applied Sciences, 2008
Peer-to-peer (P2P) file sharing has become increasingly popular, accounting for as much as 70% of Internet traffic by some estimates. Recently, we have been witnessing the emergence of a new class of popular P2P applications, namely P2P audio and video streaming. In this paper, we propose and investigate a fully distributed, scalable, and cooperative protocol for live video streaming in an overlay peer-to-peer network. Our protocol, termed P2P Super-Peer based Unstructured Live Media Streaming (PALMS-SP), combines push-pull scheduling methods to achieve high performance (in terms of delay, stream continuity, cooperation, etc.). The main contribution of PALMS-SP is that it reduces the end-to-end streaming delay and, in turn, results in better delivered quality. We have extensively evaluated the performance of PALMS-SP. Our experiments demonstrate that PALMS-SP, with the existence of super-peers, achieves better streaming quality than other existing streaming applications.
Proc. Intl. Conf. On Software Engineering, Artificial …, 2001
Efficient routing in a dynamic network is an important problem in ad hoc networks, driven by the spread of personal digital assistants (PDAs) and wireless network equipment. However, conventional routing algorithms are difficult to apply to networks with dynamic topology. Q-Routing, ...
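For reference, the standard Q-Routing update (in the style of Boyan and Littman) looks like the sketch below; the exact variant used in the paper may differ. Q[x][(d, y)] estimates the remaining delivery time from node x to destination d when forwarding through neighbor y, and is nudged toward the observed queueing plus transmission delay plus y's own best estimate.

```python
# Sketch of the standard Q-Routing update (Boyan & Littman style); the exact
# variant used in the paper may differ. Q[x][(d, y)] estimates the delivery
# time from node x to destination d when forwarding via neighbor y.
from collections import defaultdict

Q = defaultdict(lambda: defaultdict(lambda: 0.0))
ETA = 0.5   # learning rate (assumed value)

def q_routing_update(x, y, d, queue_delay, trans_delay, neighbors_of_y):
    """Called when x sends a packet bound for d to neighbor y and y reports
    its own best estimate t = min_z Q[y][(d, z)]."""
    t = min(Q[y][(d, z)] for z in neighbors_of_y) if neighbors_of_y else 0.0
    old = Q[x][(d, y)]
    Q[x][(d, y)] = old + ETA * (queue_delay + trans_delay + t - old)

def best_next_hop(x, d, neighbors_of_x):
    return min(neighbors_of_x, key=lambda y: Q[x][(d, y)])

q_routing_update("A", "B", "D", queue_delay=2.0, trans_delay=1.0,
                 neighbors_of_y=["C", "D"])
print(Q["A"][("D", "B")])    # 0 + 0.5 * (3.0 + 0 - 0) = 1.5
```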
Peer-to-peer (P2P) file sharing has become increasingly popular, accounting for as much as 70% of Internet traffic by some estimates. Recently, we have been witnessing the emergence of a new class of popular P2P applications, namely P2P audio and video streaming. While traditional P2P file distribution applications target elastic data transfers, P2P streaming focuses on the efficient delivery of audio and video content under tight timing requirements. In these applications, each node independently selects some other nodes as its neighbors and exchanges streaming data with them. In this paper, we propose and investigate a fully distributed, scalable, and cooperative protocol for live video streaming in an overlay peer-to-peer network. Our protocol, termed P2P Super-Peer based Unstructured Live Media Streaming (PALMS-SP), combines push-pull scheduling methods to achieve high performance (in terms of delay, stream continuity, cooperation, etc.). The main contribution of PALMS-SP is that it reduces the end-to-end streaming delay and, in turn, results in better delivered quality. Furthermore, with the implementation of a two-layer overlay network that consists of super-peers and ordinary peers, PALMS-SP is able to leverage the heterogeneity of bandwidths and reduce the complexity of the transmission service, and in turn shows better Quality of Service (QoS). We have extensively evaluated the performance of PALMS-SP. Our experiments demonstrate that PALMS-SP achieves good streaming quality with the existence of super-peers.
Replication is widely used in parallel and distributed systems for reliability and availability. On the other hand, developers have to consider the minimum consistency requirement for each application. Therefore, a replication protocol that ensures multiple consistency models is required. Multi-Consistency Data Replication (McRep) is a middleware-based replication protocol that can support multiple consistency models. However, McRep has a potential problem: the replicator (a server controlling replications) acting as middleware can become a performance bottleneck. We propose a backend-based replication protocol that solves this problem while ensuring the same consistency models. More precisely, we place the replicator in the backend area, where it communicates only with replica servers, and extend the replicas' role to control consistency for read-only transactions. We implemented and evaluated both the proposed protocol and McRep. The results showed that our protocol imp...
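A hypothetical sketch of the routing idea under stated assumptions: read-only transactions are served directly by replicas, while read-write transactions pass through a backend replicator that orders updates and fans them out. Class and method names are invented for illustration and do not reflect the actual McRep or proposed protocol implementation.

```python
# Hypothetical sketch of the routing idea: read-only transactions are handled
# directly by replicas, while read-write transactions pass through a backend
# replicator that orders them and fans them out to all replicas. Class and
# method names are assumptions, not the actual McRep/proposed protocol API.

class Replica:
    def __init__(self):
        self.data, self.version = {}, 0

    def read(self, key):
        # Consistency checks for read-only transactions would live here.
        return self.data.get(key), self.version

    def apply(self, version, writes):
        self.data.update(writes)
        self.version = version

class BackendReplicator:
    """Lives in the backend area and talks only to replica servers."""
    def __init__(self, replicas):
        self.replicas, self.next_version = replicas, 1

    def commit(self, writes):
        v = self.next_version
        self.next_version += 1
        for r in self.replicas:        # fan out the ordered update
            r.apply(v, writes)
        return v

replicas = [Replica(), Replica()]
rep = BackendReplicator(replicas)
rep.commit({"x": 42})
print(replicas[0].read("x"))           # read served locally by a replica
```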
2013 IEEE 5th International Conference on Cloud Computing Technology and Science, 2013
In the current SDN paradigm, an edge-overlay (distributed tunneling) model using L2-in-L3 tunneling protocols, such as VXLAN, has attracted attention for multi-tenant data center networks. The edge-overlay model enables rapid deployment of virtual networks on existing traditional network facilities, ensures flexible IP/MAC address allocation to VMs, and extends the number of virtual networks beyond the VLAN ID limitation. However, such a model has performance and incompatibility problems in traditional network environments. For L2 data center networks, this paper proposes a pure software approach that uses OpenFlow virtual switches to realize yet another edge-overlay without IP tunneling. Our model leverages a header-rewriting method as well as host-based VLAN ID usage to ensure address space isolation and scalability of the number of virtual networks. In our model, no special hardware equipment such as OpenFlow hardware switches is required; only software-based virtual switches and the controller are used. In this paper, we evaluate the performance of the proposed model in comparison with tunneling models using the GRE or VXLAN protocol. Our model showed better performance and lower CPU usage. In addition, qualitative evaluations of the model are conducted from a broader perspective.
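The header-rewriting idea can be illustrated, very loosely, with a lookup-table sketch: the sending host's virtual switch rewrites a tenant frame's destination address to a host-level address plus a host-local VLAN ID, and the receiving switch reverses the mapping. This is plain Python, not OpenFlow controller code, and all table contents and field names are assumptions.

```python
# Illustrative sketch (not OpenFlow controller code) of the header-rewriting
# idea: at the sending host's virtual switch, a tenant frame's addresses are
# rewritten to host-level addresses plus a host-local VLAN ID; the receiving
# switch reverses the mapping. Table contents and field names are assumptions.

egress_map = {
    # (tenant_id, dst_vm_mac) -> (dst_host_mac, host_local_vlan_id)
    (10, "aa:aa:aa:00:00:02"): ("02:00:00:00:00:02", 101),
}
ingress_map = {v: k for k, v in egress_map.items()}

def rewrite_on_egress(frame):
    host_mac, vlan = egress_map[(frame["tenant_id"], frame["dst_mac"])]
    return {**frame, "dst_mac": host_mac, "vlan": vlan}

def restore_on_ingress(frame):
    tenant_id, vm_mac = ingress_map[(frame["dst_mac"], frame["vlan"])]
    return {**frame, "dst_mac": vm_mac, "tenant_id": tenant_id, "vlan": None}

f = {"tenant_id": 10, "src_mac": "aa:aa:aa:00:00:01",
     "dst_mac": "aa:aa:aa:00:00:02", "payload": b"..."}
print(restore_on_ingress(rewrite_on_egress(f))["dst_mac"])
```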
2012 Third International Conference on Networking and Computing, 2012
We have proposed an auto-memoization processor based on computation reuse. The auto-memoization processor dynamically detects functions and loop iterations as reusable blocks and memoizes them automatically. In the previous model, computation reuse cannot be applied if the current input sequence differs from the past input sequences by even one input value, since the processing results will differ. This paper proposes a new partial reuse model, which can apply computation reuse to the early part of a reusable block as long as the early part of the current input sequence matches one of the past sequences. In addition, in order to acquire sufficient benefit from the partial reuse model, we also propose a technique that reduces the search overhead of the memoization table by partitioning it. Experimental results with the SPEC CPU95 benchmark suite show that the new method improves the maximum speedup from 40.6% to 55.1% and the average speedup from 10.6% to 22.8%.
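A software analogue of the partial reuse model (the paper's mechanism is implemented in hardware): past input sequences are stored together with the state reached after each input, and a new run resumes from the longest matching prefix, recomputing only the unmatched tail. The stand-in computation and table layout below are hypothetical.

```python
# Software analogue of the partial-reuse idea (the paper's mechanism is in
# hardware): past input sequences are stored with intermediate checkpoints,
# and a new run reuses the state of the longest matching input prefix.
# The "computation" below and the table layout are hypothetical.

memo_table = []   # list of (input_sequence_tuple, [state_after_each_input])

def step(state, x):
    return state + x * x          # stand-in for one expensive block step

def run_with_partial_reuse(inputs):
    inputs = tuple(inputs)
    best_len, best_states = 0, None
    for past_inputs, states in memo_table:
        n = 0
        while n < min(len(past_inputs), len(inputs)) and past_inputs[n] == inputs[n]:
            n += 1
        if n > best_len:
            best_len, best_states = n, states
    state = best_states[best_len - 1] if best_len else 0
    states = list(best_states[:best_len]) if best_len else []
    for x in inputs[best_len:]:   # only the unmatched tail is recomputed
        state = step(state, x)
        states.append(state)
    memo_table.append((inputs, states))
    return state

print(run_with_partial_reuse([1, 2, 3, 4]))   # computes everything
print(run_with_partial_reuse([1, 2, 3, 9]))   # reuses the [1, 2, 3] prefix
```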
We have proposed a technique that reforms the probabilistic routing algorithm ARH (Ant Routing with routing history) by exploiting the characteristics of MANETs. In the proposed technique, a node sends hello packets together with data packets, avoids unreliable links, learns routes from intercepted (not directly received) packets, and improves the retransmission method. In this paper, we compare the proposed technique with AODV (Ad hoc On-Demand Distance Vector routing) and ARH through simulation experiments.
A degree-constrained minimum spanning tree (d-MST) of a graph is a well-studied problem that is important in the design of communication and electric power networks. In this study, we propose a formalization of and distributed cooperation methods for the d-MST problem in multi-agent systems. The proposed exact/approximate methods resemble approaches that apply the Distributed Constraint Optimization Problem. In the exact method, each agent propagates messages that represent sets of sub-trees in a bottom-up manner. To reduce the number of sub-trees, the constraints of the d-MST are considered. In the approximate method, each agent partially drops sub-trees based on several heuristics. We experimentally compare the proposed techniques in terms of solution quality and the cost of tree construction.
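To make the degree constraint concrete, here is a simple centralized Kruskal-style heuristic that skips any edge whose endpoints have already reached degree d. It is only an illustration; the paper's exact and approximate methods are distributed and DCOP-like, which this sketch does not reproduce, and the example graph is made up.

```python
# Centralized Kruskal-style heuristic with a degree check, shown only to make
# the degree constraint concrete; the paper's methods are distributed and
# DCOP-like, which this sketch does not reproduce. The example graph is made up.

def d_mst_greedy(n, edges, d):
    """edges: list of (weight, u, v); returns a spanning tree whose vertices
    all have degree <= d, or None if the greedy pass fails to connect it."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    degree, tree = [0] * n, []
    for w, u, v in sorted(edges):
        if find(u) != find(v) and degree[u] < d and degree[v] < d:
            parent[find(u)] = find(v)
            degree[u] += 1
            degree[v] += 1
            tree.append((u, v, w))
    return tree if len(tree) == n - 1 else None

edges = [(1, 0, 1), (1, 0, 2), (1, 0, 3), (2, 1, 2), (2, 2, 3), (3, 1, 3)]
print(d_mst_greedy(4, edges, d=2))
```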
Systems and Computers in Japan, 2006
Systems and Computers in Japan, 2006
In Multi-Agent Reinforcement Learning, each agent observes the states of other agents as part of the environment. Therefore, the state space is exponential in the number of agents, and the learning speed decreases significantly. Modular Q-learning [6] requires only a very small state space. However, its incomplete observation causes a decline in performance. In this paper, we improve Modular Q-learning's performance with a partly high-dimensional state space.
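A minimal sketch of the modular idea: each module keeps a Q-table over a small partial state, and the joint action is chosen by summing Q-values across modules (the greatest-mass rule). State encodings, reward handling, and parameters are assumptions for illustration.

```python
# Sketch of modular Q-learning: each module keeps a Q-table over a small
# partial state (e.g., the relation to one other agent), and actions are
# chosen by the "greatest mass" rule (summing Q-values across modules).
# State encodings and parameters here are assumptions for illustration.
from collections import defaultdict

ACTIONS = ["up", "down", "left", "right"]
ALPHA, GAMMA = 0.1, 0.9

class Module:
    def __init__(self):
        self.q = defaultdict(lambda: {a: 0.0 for a in ACTIONS})

    def update(self, s, a, reward, s_next):
        target = reward + GAMMA * max(self.q[s_next].values())
        self.q[s][a] += ALPHA * (target - self.q[s][a])

def greatest_mass_action(modules, partial_states):
    """partial_states[i] is module i's (small) view of the environment."""
    return max(ACTIONS, key=lambda a: sum(m.q[s][a]
                                          for m, s in zip(modules, partial_states)))

mods = [Module(), Module()]
mods[0].update("near_goal", "up", reward=1.0, s_next="at_goal")
print(greatest_mass_action(mods, ["near_goal", "other_agent_left"]))
```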
Lecture Notes in Computer Science, 2000
Transactions of the Japanese Society for Artificial Intelligence, 2013
Distributed Constraint Optimization problems (DCOPs) have been studied as a fundamental model of multi-agent cooperation. In traditional DCOPs, all agents cooperate to optimize the sum of their cost functions. However, in practical systems some agents may desire to select the values of their variables without cooperation. In special cases, such agents may take the values with the worst impact on the quality of the result reachable by the optimization process. Similar classes of problems have been studied as Quantified (Distributed) Constraint Problems, where the variables of the CSP have existential/universal quantifiers. All constraints should be satisfied independently of the values taken by universal variables. In this paper, a Quantified Distributed Constraint Optimization problem (QDCOP) that extends the framework of DCOPs is presented. We apply existential/universal quantifiers to distinguish uncooperative variables. A universally quantified variable is left unassigned by the optimization, as the result has to hold when it takes any value from its domain, while an existentially quantified variable takes exactly one of its values for each context. We consider that the QDCOP applies the concept of game-tree search to DCOP. If the original problem is a minimization problem, agents that own universally quantified variables may intend to maximize the cost value in the worst case, while the other agents intend to minimize it as usual. Therefore, only bounds, especially upper bounds, of the optimal value are guaranteed. The purpose of the new class of problems is to compute such bounds, as well as sub-optimal solutions. For the QDCOP, we propose solution methods based on min-max/alpha-beta and ADOPT algorithms.
Transactions of the Japanese Society for Artificial Intelligence, 2010
The Distributed Constraint Optimization Problem (DCOP) is a fundamental framework of multi-agent systems. With DCOPs, a multi-agent system is represented as a set of variables and a set of constraints/cost functions. Distributed task scheduling and distributed resource allocation can be formalized as DCOPs. In this paper, we propose an efficient method that applies directed soft arc consistency to a DCOP. In particular, we focus on DCOP solvers that employ pseudo-trees. A pseudo-tree is a graph structure for a constraint network that represents a partial ordering of variables. Some pseudo-tree-based search algorithms perform optimistic searches using explicit/implicit backtracking in parallel. However, for cost functions taking a wide range of cost values, such exact algorithms require many search iterations. Therefore, additional improvements are necessary to reduce the number of search iterations. A previous study used a dynamic-programming-based preprocessing technique that estimates lower bound values of costs. However, there are opportunities for further improvements in efficiency. In addition, modifications of the search algorithm are necessary to use the estimated lower bounds. The proposed method applies soft arc consistency (soft AC) enforcement to the DCOP. In the proposed method, directed soft AC is performed based on a pseudo-tree in a bottom-up manner. Using the directed soft AC, the global lower bound value of the cost functions is passed up to the root node of the pseudo-tree. It also reduces the values of binary cost functions overall. As a result, the original problem is converted into an equivalent problem, which can be solved efficiently using common search algorithms. Therefore, no major modifications to search algorithms are necessary. The performance of the proposed method is evaluated by experimentation. The results show that it is more efficient than previous methods.
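A minimal illustration of the bottom-up projection idea, under simplifying assumptions: each node subtracts the minimum of its unary cost function and contributes it to a global lower bound gathered at the root of the pseudo-tree. The actual method also shifts costs out of binary cost functions via directed soft AC, which this sketch omits; the tree and costs are made up.

```python
# Minimal illustration of the bottom-up idea: each node projects the minimum of
# its unary cost function up toward the root of the pseudo-tree, so the root
# accumulates a global lower bound and each unary function is left with a zero
# minimum. The full method also shifts costs out of binary cost functions,
# which this sketch omits; the tree and costs below are made up.

def depth(tree, n):
    d = 0
    while tree[n] is not None:
        n, d = tree[n], d + 1
    return d

def directed_unary_projection(tree, unary):
    """tree: dict node -> parent (root maps to None); unary: dict node ->
    dict value -> cost. Returns (reduced unary functions, global lower bound)."""
    lower_bound = 0
    # Process children before parents (deepest nodes first).
    order = sorted(tree, key=lambda n: -depth(tree, n))
    for node in order:
        m = min(unary[node].values())
        unary[node] = {v: c - m for v, c in unary[node].items()}
        lower_bound += m            # projected all the way to the root
    return unary, lower_bound

tree = {"x1": None, "x2": "x1", "x3": "x1"}
unary = {"x1": {0: 2, 1: 4}, "x2": {0: 1, 1: 3}, "x3": {0: 5, 1: 6}}
print(directed_unary_projection(tree, unary))   # lower bound = 2 + 1 + 5 = 8
```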