Trusting in others' biases: Fostering guarded trust in collaborative filtering and recommender systems
Related papers
Recommender systems (RS) have been used to suggest items (movies, books, songs, etc.) that users might like. RSs compute a similarity between users and use it as a weight for those users' ratings. However, they have several weaknesses, such as sparseness, the cold-start problem, and vulnerability to attacks. We assert that these weaknesses can be alleviated by a trust-aware system that takes into account the "web of trust" provided by every user. Specifically, we analyze data from the popular Internet web site epinions.com. The dataset consists of 49,290 users who wrote reviews (with ratings) of items and explicitly specified their web of trust, i.e., users whose reviews they have consistently found to be valuable. We show that any two users usually have few items rated in common. For this reason, the classic RS technique is often ineffective and cannot compute a user-similarity weight for many pairs of users. By exploiting the webs of trust instead, it is possible to propagate trust and infer an additional weight for other users. We show how this quantity can be computed for a much larger number of users.
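The core idea lends itself to a short illustration. Below is a minimal sketch, not the paper's exact algorithm, of propagating trust over a web-of-trust graph with linear decay, so that a weight can be inferred even for users who share no co-rated items; the function and parameter names (`propagate_trust`, `max_depth`) are illustrative assumptions.

```python
# Minimal sketch (not the paper's exact algorithm): propagate trust over a
# web-of-trust graph so that a weight exists even for users with no co-rated
# items. Linear decay with distance, loosely in the spirit of MoleTrust.
from collections import deque

def propagate_trust(trust_graph, source, max_depth=3):
    """Breadth-first propagation: a user reached at distance d from `source`
    receives trust (max_depth - d) / max_depth, so directly trusted users
    get weight 1.0 and weights shrink linearly with distance."""
    weights = {}
    frontier = deque([(source, 0)])
    visited = {source}
    while frontier:
        user, depth = frontier.popleft()
        if depth >= max_depth:
            continue
        for trusted in trust_graph.get(user, []):
            if trusted not in visited:
                visited.add(trusted)
                weights[trusted] = (max_depth - depth) / max_depth
                frontier.append((trusted, depth + 1))
    return weights

# Example: Alice trusts Bob, Bob trusts Carol; Carol gets a decayed weight
# even if Alice and Carol have rated no items in common.
web_of_trust = {"alice": ["bob"], "bob": ["carol"], "carol": []}
print(propagate_trust(web_of_trust, "alice"))  # {'bob': 1.0, 'carol': 0.67}
```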
CHI Conference on Human Factors in Computing Systems
Three of the most common approaches used in recommender systems are content-based filtering (matching users' preferences with products' characteristics), collaborative filtering (matching users with similar preferences), and demographic filtering (catering to users based on demographic characteristics). Do users' intuitions lead them to trust one of these approaches over the others, independent of the actual operations of these different systems? Does their faith in one type or another depend on the quality of the recommendation, rather than on how the recommendation appears to have been derived? We conducted an empirical study with a prototype of a movie recommender system to find out. A 3 (Ostensible Recommender Type: Content vs. Collaborative vs. Demographic Filtering) x 2 (Recommendation Quality: Good vs. Bad) experiment (N = 226) investigated how users evaluate systems and attribute responsibility for the recommendations they receive. We found that users trust systems that use collaborative filtering more, regardless of the system's performance. They think that they themselves are responsible for good recommendations but that the system is responsible for bad recommendations (reflecting a self-serving bias). Theoretical insights, design implications, and practical solutions for the cold start problem are discussed.
2005
Recommender systems have proven to be an important response to the information overload problem, providing users with more proactive and personalized information services. Collaborative filtering techniques have proven to be a vital component of many such recommender systems, as they facilitate the generation of high-quality recommendations by leveraging the preferences of communities of similar users. In this paper we suggest that the traditional emphasis on user similarity may be overstated.
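For context, here is a minimal sketch of the classic user-based collaborative-filtering prediction that this abstract questions: a similarity-weighted average of neighbours' deviations from their mean ratings (Resnick's formula). The names used here are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of classic user-based CF prediction: the target user's
# predicted rating is their mean rating plus a similarity-weighted average
# of neighbours' deviations from their own means (Resnick's formula).
def predict(ratings, sims, user, item):
    """ratings: {user: {item: rating}}; sims: {other_user: similarity to `user`}."""
    user_mean = sum(ratings[user].values()) / len(ratings[user])
    num = den = 0.0
    for other, sim in sims.items():
        if item in ratings.get(other, {}):
            other_mean = sum(ratings[other].values()) / len(ratings[other])
            num += sim * (ratings[other][item] - other_mean)
            den += abs(sim)
    return user_mean + num / den if den else user_mean

# Example: u2 rated item "c" above their own mean, so u1's prediction
# for "c" is pulled above u1's mean of 3.0.
ratings = {"u1": {"a": 4, "b": 2}, "u2": {"a": 5, "b": 1, "c": 5}}
print(predict(ratings, {"u2": 0.9}, "u1", "c"))  # ~4.33
```

Note that when no similar neighbour has rated the item (the sparsity problem the surrounding abstracts describe), the formula falls back to the user's mean, which is exactly where trust-based weights can step in.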
Dynamics of human trust in recommender systems
Proceedings of the 8th ACM Conference on Recommender systems - RecSys '14, 2014
The trust that humans place in recommendations is key to the success of recommender systems. The formation and decay of trust in recommendations is a dynamic process influenced by context, human preferences, accuracy of recommendations, and the interactions of these factors. This paper describes two psychological experiments (N=400) that evaluate the evolution of trust in recommendations over time, under personalized and non-personalized recommendations, i.e., recommendations that either match or do not match a participant's profile. Main findings include: humans trust inaccurate recommendations more than they should; when recommendations are personalized, they lose trust in inaccurate recommendations faster than when recommendations are not personalized; and participants report less trust and lower overall ratings for personalized but inaccurate recommendations compared to non-personalized inaccurate recommendations. We connect these psychological findings to possible implications for the design of recommender systems.
Trust and Trustworthiness in Social Recommender Systems
Companion Proceedings of The 2019 World Wide Web Conference on - WWW '19, 2019
The prevalence of misinformation on online social media has tangible empirical connections to increasing political polarization and partisan antipathy in the United States. Ranking algorithms for social recommendation often encode broad assumptions about network structure (like homophily) and group cognition (e.g., that social action is largely imitative). Assumptions like these can be naïve and exclusionary in the era of fake news and ideological uniformity towards the political poles. We examine these assumptions with aid from the user-centric framework of trustworthiness in social recommendation. The constituent dimensions of trustworthiness (diversity, transparency, explainability, disruption) highlight new opportunities for discouraging dogmatization and building decision-aware, transparent news recommender systems.
Enhancing the trust-based recommendation process with explicit distrust
ACM Transactions on the Web, 2013
When a Web application with a built-in recommender offers a social networking component which enables its users to form a trust network, it can generate more personalized recommendations by combining user ratings with information from the trust network. These are the so-called trust-enhanced recommendation systems. While research on the incorporation of trust for recommendations is thriving, the potential of explicitly stated distrust remains almost unexplored. In this article, we introduce a distrust-enhanced recommendation algorithm which has its roots in Golbeck's trust-based weighted mean. Through experiments on a set of reviews from Epinions.com, we show that our new algorithm outperforms its standard trust-only counterpart with respect to accuracy, thereby demonstrating the positive effect that explicit distrust can have on trust-based recommendations.
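As a rough illustration of the idea, here is a minimal sketch of a Golbeck-style trust-based weighted mean extended with one simple use of explicit distrust (excluding distrusted raters). The paper's actual distrust-enhanced algorithm is more elaborate; all names and the specific distrust rule here are assumptions made for the example.

```python
# Illustrative sketch only: Golbeck's trust-based weighted mean (predicted
# rating = trust-weighted average of trusted raters' ratings), extended with
# one simple use of explicit distrust: a distrusted rater is excluded even
# if some trust score exists for them.
def trust_weighted_mean(trust, distrust, ratings, user, item):
    """trust/distrust: {user: {other: score in [0, 1]}};
    ratings: {user: {item: rating}}. Returns None if no usable rater."""
    num = den = 0.0
    for other, t in trust.get(user, {}).items():
        if distrust.get(user, {}).get(other, 0.0) > 0.0:
            continue  # explicit distrust overrides trust for this rater
        if item in ratings.get(other, {}):
            num += t * ratings[other][item]
            den += t
    return num / den if den else None

# Example: Bob is trusted and counted; Eve is trusted but also explicitly
# distrusted, so her rating of item "a" is ignored.
trust = {"alice": {"bob": 0.8, "eve": 0.6}}
distrust = {"alice": {"eve": 1.0}}
ratings = {"bob": {"a": 4}, "eve": {"a": 1}}
print(trust_weighted_mean(trust, distrust, ratings, "alice", "a"))  # 4.0
```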