A Trust-based Mechanism for Avoiding Liars in Referring of Reputation in Multiagent System
Related papers
Enhancing trust-based competitive multi agent systems by certified reputation
2012
Trust models play a crucial role in many application contexts involving competition, such as e-commerce and e-learning. These application domains deal with very large sets of users on heterogeneous platforms. As a consequence, the Multi-Agent Systems paradigm appears to be one of the most promising approaches to apply in this context. Verifying the trustworthiness of a reputation feedback (recommendation) provided by an agent is a crucial issue to face when designing reputation models for competitive Multi-Agent Systems. In the past, the experience of the ART community highlighted that, in the absence of information about the quality of the recommendation providers, it is better to exploit only direct knowledge about the environment (i.e., a reliability measure) and discard the reputation measure. However, when the agent space becomes large enough and the number of "expert" agents to contact is small, relying on reliability alone is not very effective. Unfortunately, the size of the agent space also makes the trustworthiness of recommendations very critical, so that combining reliability and reputation is not a trivial task. In this paper, we deal with this problem by studying how introducing the notion of certified reputation, and exploiting it to combine reputation and reliability, can improve the performance of an agent in a competitive MAS context. We analyze different populations using the standard ART platform, highlighting a significant positive impact and providing very interesting results.
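A minimal sketch of the combination the abstract describes, assuming recommendations carry a verifiable certificate: only certified feedback enters the reputation average, and direct reliability is used alone when no certified feedback exists. The names (`Recommendation`, `combined_trust`) and the fixed weight `alpha` are illustrative, not the paper's actual model.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    rating: float    # reputation feedback in [0, 1]
    certified: bool  # True if the feedback carries a valid certificate

def combined_trust(reliability, recommendations, alpha=0.5):
    """Blend direct reliability with a reputation built only from
    certified recommendations; fall back to reliability alone when
    no certified feedback is available."""
    certified = [r.rating for r in recommendations if r.certified]
    if not certified:
        return reliability
    reputation = sum(certified) / len(certified)
    return alpha * reliability + (1 - alpha) * reputation

# Example: two certified ratings and one uncertified one.
recs = [Recommendation(0.9, True), Recommendation(0.2, False), Recommendation(0.7, True)]
print(combined_trust(0.6, recs))  # 0.5*0.6 + 0.5*0.8 = 0.7
```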
CRM: An efficient trust and reputation model for agent computing
Knowledge-Based Systems, 2012
In open multi-agent systems, agents engage in interactions to share and exchange information. Because these agents are self-interested, they may jeopardize mutual trust by not performing actions as they are expected to. Different models of trust have therefore been proposed to assess the credibility of peers in the environment, but these frameworks fail to consider and analyze the multiple factors impacting trust. In this paper, we overcome this limitation by proposing a comprehensive trust framework as a multi-factor model, which applies a number of measurements to evaluate the trust of interacting agents. First, the framework considers direct interactions among agents; this part of the framework is called online trust estimation. Then, after a variable interval of time, the actual performance of the evaluated agent is compared against the information provided by other agents (consulting agents). This comparison, performed in an off-line process, both adjusts the credibility of the agents contributing to trust evaluation and improves the system's trust evaluation by minimizing the estimation error. What specifically distinguishes this work from previous proposals in the same domain is its novel after-interaction investigation and the performance analysis that demonstrates the applicability of the proposed model in distributed multi-agent systems. In this paper, the agent structure and interaction mechanism of the proposed framework are described. A theoretical analysis of trust assessment and the system implementation, along with simulations, are also discussed. Finally, a comparison of our trust framework with other well-known frameworks from the literature is provided.
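The off-line adjustment the abstract outlines could look roughly like this: once the target's actual performance is observed, each consulting agent's credibility is moved toward its prediction accuracy. The update rule and the learning rate `eta` are assumptions for illustration, not the paper's exact equations.

```python
def adjust_credibility(credibility, reports, actual, eta=0.2):
    """credibility: dict agent -> weight in [0, 1]
    reports: dict agent -> predicted performance in [0, 1]
    actual: observed performance in [0, 1]."""
    updated = {}
    for agent, predicted in reports.items():
        error = abs(predicted - actual)
        # Accurate reporters gain credibility, inaccurate ones lose it.
        new = credibility[agent] + eta * ((1 - error) - credibility[agent])
        updated[agent] = min(1.0, max(0.0, new))
    return updated

cred = {"a": 0.5, "b": 0.5}
reports = {"a": 0.9, "b": 0.2}
print(adjust_credibility(cred, reports, actual=0.85))
# "a" (error 0.05) moves up toward 0.95; "b" (error 0.65) drops toward 0.35.
```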
A trust model for new member in multiagent system
Vietnam Journal of Computer Science, 2015
Computational trust has been modelled to support agents in selecting partners in open and distributed multiagent systems. Most current models are based either on past transaction experience with a given partner or on the partner's reputation as reported by other agents in the system. However, these models cannot handle the case in which a newly arrived agent has no experience with partners and cannot obtain information about their reputation. Both the agents already in the system and the newcomer may then face an obstacle in estimating trust in prospective partners. In this paper, we introduce a novel mechanism for computing the trust of a newly arrived partner based on the similarity between the profile of the new agent and the profiles of well-known agents. Experiments have been conducted to evaluate the proposed model in the scenario of an e-commerce environment. Our experimental results indicate that the combined model with similarity trust significantly improves computational results in some particular situations, compared with some recent trust models.
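One way to picture the similarity mechanism, under the assumption that profiles are numeric feature vectors: a newcomer borrows trust assessments from agents whose profiles resemble its own, weighted by cosine similarity. All names and the feature encoding here are hypothetical.

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def similarity_trust(new_profile, known_agents, partner):
    """known_agents: list of (profile_vector, {partner: trust}) pairs.
    Returns a similarity-weighted average of the known agents' trust
    in the given partner, or None if no usable evidence exists."""
    num = den = 0.0
    for profile, trusts in known_agents:
        if partner not in trusts:
            continue
        sim = cosine(new_profile, profile)
        num += sim * trusts[partner]
        den += sim
    return num / den if den else None

known = [([1.0, 0.0, 0.5], {"seller42": 0.9}),
         ([0.2, 1.0, 0.1], {"seller42": 0.3})]
print(similarity_trust([0.9, 0.1, 0.4], known, "seller42"))
# Dominated by the first agent, whose profile is far more similar.
```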
T-REX: A Hybrid Agent Trust Model Based on Witness Reputation and Personal Experience
E-Commerce and Web …, 2010
The Semantic Web will transform the way people satisfy their requests, letting them delegate complex actions to intelligent agents that act on their users' behalf in real-life applications, under uncertain and risky situations. Trust has thus already been recognized as a key issue in Multi-Agent Systems. Current computational trust models are usually built either on an agent's direct experience or on reports provided by others. In order to combine the advantages and overcome the drawbacks of these two approaches, namely interaction trust and witness reputation, this paper presents a hybrid trust model that combines them in a dynamic and flexible manner. The main advantage of our approach is that it provides a reliable and flexible model with low bandwidth and storage costs. Moreover, we present the integration of this model in JADE, a multi-agent framework, and provide an evaluation and an e-Commerce scenario that illustrate the usability of the proposed model.
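A rough sketch of one plausible dynamic blending of the two sources the abstract names: the weight on personal experience grows with the number of direct interactions, so witness reputation dominates early and direct experience dominates later. The saturation constant `k` and the weighting rule are our assumptions, not T-REX's actual scheme.

```python
def hybrid_trust(direct_trust, n_interactions, witness_reputation, k=10):
    """Weight on direct experience approaches 1 as interactions accumulate."""
    w = n_interactions / (n_interactions + k)
    return w * direct_trust + (1 - w) * witness_reputation

print(hybrid_trust(0.9, 2, 0.4))   # few interactions: close to the witness value
print(hybrid_trust(0.9, 50, 0.4))  # many interactions: close to the direct value
```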
TruMet: An approach towards computing trust in multi-agent environment
2006
The growing popularity of multi-agent approaches to the formation and operation of virtual organizations (VOs) on the Internet offers both opportunities and risks. One of the risks in such a community lies in identifying trustworthy agent partners for transactions. In this paper we describe our trust model, which contributes to measuring trust in interacting agents. Named TruMet, the trust metric model works on the basis of parameters we have identified as relevant to the features of the community. The model primarily derives a trust value from the agent's reputation, as provided by the agent itself, and the agent's aggregate rating, as provided by witness agents. The final trust value is a weighted average of these two components. When computing the aggregate rating, a weight-based method is adopted to discount the contribution of possibly unfair ratings by witness agents.
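The computation described above reduces to a weighted average of self-reported reputation and an aggregate witness rating in which outlying ratings are discounted. The sketch below uses distance from the median as the unfairness signal; TruMet's exact discounting scheme may differ.

```python
import statistics

def trumet_trust(self_reputation, witness_ratings, w_self=0.4, w_witness=0.6):
    """Discount ratings far from the median, then take a weighted average
    of self-reported reputation and the aggregate witness rating."""
    med = statistics.median(witness_ratings)
    # Weight each rating by its closeness to the median (possible unfairness).
    weights = [1 - abs(r - med) for r in witness_ratings]
    aggregate = sum(w * r for w, r in zip(weights, witness_ratings)) / sum(weights)
    return w_self * self_reputation + w_witness * aggregate

print(trumet_trust(0.8, [0.7, 0.75, 0.1]))  # the 0.1 outlier is discounted
```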
A probabilistic trust model for handling inaccurate reputation sources
Trust Management, 2005
This research aims to develop a model of trust and reputation that will ensure good interactions amongst software agents, particularly in large-scale open systems. The following are key drivers for our model: (1) agents may be self-interested and may provide false accounts of their experiences with other agents if it is beneficial for them to do so; (2) agents will need to interact with other agents with which they have no past experience. Against this background, we have developed TRAVOS (Trust and Reputation model for Agent-based Virtual OrganisationS), which models an agent's trust in an interaction partner. Specifically, trust is calculated using probability theory, taking into account past interactions between agents. When there is a lack of personal experience between agents, the model draws upon reputation information gathered from third parties. In this latter case, we pay particular attention to handling the possibility that reputation information may be inaccurate.
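TRAVOS calculates trust with probability theory over past interaction outcomes; a common reading is a beta-distribution model over binary success/failure counts, sketched below. The report-pooling step is a naive simplification: the full model additionally discounts inaccurate reputation sources, which is omitted here.

```python
def beta_trust(successes, failures):
    """Expected value of Beta(successes + 1, failures + 1):
    the probability the partner fulfils the next interaction."""
    return (successes + 1) / (successes + failures + 2)

def with_reputation(successes, failures, reports):
    """When direct evidence is scarce, pool third-party reports,
    each given as an (observed successes, observed failures) pair."""
    s = successes + sum(rs for rs, rf in reports)
    f = failures + sum(rf for rs, rf in reports)
    return beta_trust(s, f)

print(beta_trust(8, 2))                         # 0.75 from direct experience
print(with_reputation(0, 0, [(5, 1), (4, 2)]))  # a newcomer relying on reports
```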
Integrating trust measures in multiagent systems
International Journal of Intelligent Systems, 2012
Several models have been proposed in the past for representing both reliability and reputation. However, a crucial point in the practical use of these two measures is the possibility of suitably combining them to support the agent's decisions. In the past, we proposed a reliability-reputation model, called RRAF, that allows the user to choose how much importance to give to reliability with respect to reputation. However, RRAF shows some limitations, namely: (i) the weight assigned to reliability versus reputation is set arbitrarily by the user, without considering the system's evolution; (ii) the trust measure that an agent a perceives about an agent b is completely independent of the trust measures perceived by every other agent c, while in reality trust measures are mutually dependent. In this paper, we propose an extension of RRAF aimed at addressing the limitations above. In particular, we introduce a new trust-reputation model, called TRR, that considers, from a mathematical viewpoint, the interdependence among all the trust measures computed in the system. Moreover, this model dynamically computes a parameter measuring the importance of reliability with respect to reputation. Experiments performed on the well-known ART platform show the significant advantages in terms of effectiveness of TRR with respect to RRAF.
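The dynamic weight computation TRR introduces might be pictured as follows: the weight given to reliability versus reputation shifts toward whichever measure has been the better predictor of actual outcomes. The update rule and learning rate `eta` below are our illustrative assumptions, not the TRR equations.

```python
def update_weight(w, reliability_pred, reputation_pred, outcome, eta=0.1):
    """Shift weight toward whichever measure predicted the outcome better."""
    err_rel = abs(reliability_pred - outcome)
    err_rep = abs(reputation_pred - outcome)
    if err_rel < err_rep:
        w = w + eta * (1 - w)  # reliability was the better predictor
    elif err_rep < err_rel:
        w = w - eta * w        # reputation was the better predictor
    return w

def trust(w, reliability, reputation):
    return w * reliability + (1 - w) * reputation

w = 0.5
w = update_weight(w, reliability_pred=0.9, reputation_pred=0.4, outcome=0.85)
print(w, trust(w, 0.9, 0.4))  # weight moves toward reliability
```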
A computational trust model for multi-agent interactions based on confidence and reputation
2003
In open environments in which autonomous agents can break contracts, computational models of trust have an important role to play in determining who to interact with and how interactions unfold. To this end, we develop such a trust model, based on confidence and reputation, and show how it can be concretely applied, using fuzzy sets, to guide agents in evaluating past interactions and in establishing new contracts with one another.
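A small sketch of how fuzzy sets can grade past interactions, as the abstract suggests: triangular membership functions map a numeric interaction outcome onto linguistic trust levels. The particular sets and breakpoints are assumptions for illustration only.

```python
def triangular(x, a, b, c):
    """Membership of x in a triangular fuzzy set with support (a, c) and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify_outcome(x):
    """Grade an interaction outcome in [0, 1] against three fuzzy trust levels."""
    return {
        "untrustworthy": triangular(x, -0.5, 0.0, 0.5),
        "neutral":       triangular(x,  0.0, 0.5, 1.0),
        "trustworthy":   triangular(x,  0.5, 1.0, 1.5),
    }

print(fuzzify_outcome(0.7))  # partly neutral (0.6), partly trustworthy (0.4)
```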
Devising a Trust Model for Multi-Agent Interactions Using Confidence and Reputation
Applied Artificial Intelligence, 2004
An approach to comprehensive trust management in multi-agent systems with credibility
2008
Security is a substantial concern in multi-agent systems where agents dynamically enter and leave the system. Different models of trust have been proposed to assist agents in deciding whether to interact with requesters who are not known (or not very well known) by the service provider. To this end, in this paper we progress our work on security for agent-based systems, which is embedded in the service provider's trust evaluation of the counterpart. Agents are autonomous software entities equipped with advanced communication (using public dialogue game-based protocols and private strategies on how to use these protocols) and reasoning capabilities. The service provider agent obtains reports provided by trustworthy agents (regarding direct interaction histories) and referee agents (in the form of recommendations) and combines a number of measurements, such as the number of interactions and timely relevance, to provide an overall estimation of a particular agent's likely behavior. By requesting this agent, called the target agent, to provide the number of interactions it has had with each agent, the service provider penalizes agents who lied about having information for the trust evaluation process. In addition, after a period of time, the actual behavior of the target agent is compared against the information provided by others. This comparison both adjusts the credibility of the agents contributing to trust evaluation and improves the system's trust evaluation by minimizing the estimation error. Overall, the proposed framework is shown to help agents effectively estimate the trust of interacting agents.
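The lying-penalty step described above can be sketched as a cross-check: each witness's claimed number of interactions with the target is compared against the target's own record, and witnesses whose claims are not corroborated have their credibility cut. The names and the penalty factor are hypothetical.

```python
def penalize_liars(credibility, witness_claims, target_counts, penalty=0.5):
    """witness_claims: dict witness -> interactions it claims to have had
    with the target; target_counts: the target's own interaction record.
    Witnesses claiming interactions the target cannot corroborate are
    treated as liars and have their credibility reduced."""
    updated = dict(credibility)
    for witness, claimed in witness_claims.items():
        recorded = target_counts.get(witness, 0)
        if claimed > recorded:
            updated[witness] = credibility[witness] * penalty
    return updated

cred = {"w1": 0.8, "w2": 0.8}
claims = {"w1": 10, "w2": 10}
print(penalize_liars(cred, claims, target_counts={"w1": 10, "w2": 0}))
# w2 claimed interactions the target never recorded: credibility halved.
```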