Robin Burke | DePaul University

Papers by Robin Burke

Research paper thumbnail of Multistakeholder Recommender Systems

Recommender Systems Handbook

Research paper thumbnail of Fairness in Information Access Systems

Recommendation, information retrieval, and other information access systems pose unique challenges for investigating and applying the fairness and non-discrimination concepts that have been developed for studying other machine learning systems. While fair information access shares many commonalities with fair classification, the multistakeholder nature of information access applications, the rank-based problem setting, the centrality of personalization in many cases, and the role of user response complicate the problem of identifying precisely what types and operationalizations of fairness may be relevant, let alone measuring or promoting them. In this monograph, we present a taxonomy of the various dimensions of fair information access and survey the literature to date on this new and rapidly-growing topic. We preface this with brief introductions to information access and algorithmic fairness, to facilitate use of this work by scholars with experience in one (or neither) of these fields.

Research paper thumbnail of Algorithm Selection with Librec-auto

Due to the complexity of recommendation algorithms, experimentation on recommender systems has become a challenging task. Current recommendation algorithms, while powerful, involve large numbers of hyperparameters. Tuning hyperparameters to find the best recommendation outcome often requires executing large numbers of algorithmic experiments, particularly when multiple evaluation metrics are considered. Existing recommender systems platforms fail to provide a basis for systematic experimentation of this type. In this paper, we describe librec-auto, a wrapper for the well-known LibRec library, which provides an environment that supports automated experimentation.

Research paper thumbnail of Crank up the Volume: Preference Bias Amplification in Collaborative Recommendation

ArXiv, 2019

Recommender systems are personalized: we expect the results given to a particular user to reflect that user's preferences. Some researchers have studied the notion of calibration (how well recommendations match users' stated preferences) and bias disparity (the extent to which mis-calibration affects different user groups). In this paper, we examine bias disparity over a range of different algorithms and for different item categories and demonstrate significant differences between model-based and memory-based algorithms.

Research paper thumbnail of Adapting to User Preference Changes in Interactive Recommendation

Recommender systems have become essential tools in many application areas, as they help alleviate information overload by tailoring their recommendations to users' personal preferences. Users' interests in items, however, may change over time depending on their current situation. Without considering the current circumstances of a user, recommendations may match the user's general preferences but have little utility in his/her current situation. We focus on designing systems that interact with the user over a number of iterations and at each step receive feedback from the user in the form of a reward or utility value for the recommended items. The goal of the system is to maximize the sum of obtained utilities over each interaction session. We use a multi-armed bandit strategy to model this online learning problem and we propose techniques for detecting changes in user preferences. The recommendations are then generated based on the most recent preferences...
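The bandit formulation described above can be sketched concretely. The following is a minimal, illustrative sketch, not the paper's exact method: a sliding-window epsilon-greedy bandit in which only recent feedback counts toward an arm's estimated value, so a shift in user preferences is eventually reflected in the arm selected. The window size, exploration rate, and reward model are all assumptions for illustration.

```python
import random

class SlidingWindowBandit:
    """Epsilon-greedy bandit that estimates arm value from recent rewards only."""

    def __init__(self, arms, window=20, epsilon=0.1, seed=0):
        self.arms = list(arms)
        self.window = window      # only the most recent rewards count
        self.epsilon = epsilon    # exploration rate
        self.history = {a: [] for a in self.arms}
        self.rng = random.Random(seed)

    def select(self):
        # Explore with probability epsilon; otherwise exploit the arm
        # with the best windowed mean reward (unseen arms tried first).
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.arms)

        def recent_mean(arm):
            h = self.history[arm][-self.window:]
            return sum(h) / len(h) if h else float("inf")

        return max(self.arms, key=recent_mean)

    def update(self, arm, reward):
        # Record observed feedback (reward/utility) for the recommended arm.
        self.history[arm].append(reward)
```

Because old rewards fall out of the window, an arm that was once preferred but no longer earns rewards loses its advantage after a few interactions, which is one simple way to adapt to preference drift.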

Research paper thumbnail of Synthetic Attribute Data for Evaluating Consumer-side Fairness

ArXiv, 2018

When evaluating recommender systems for their fairness, it may be necessary to make use of demographic attributes, which are personally sensitive and usually excluded from publicly-available data sets. In addition, these attributes are fixed, and therefore it is not possible to experiment with different distributions using the same data. In this paper, we describe the Frequency-Linked Attribute Generation (FLAG) algorithm and show its applicability for assigning synthetic demographic attributes to recommendation data sets.
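The idea of linking synthetic attributes to observable profile statistics can be illustrated with a hedged sketch. The function below is not the published FLAG algorithm; it simply assigns a binary attribute whose probability is linked to profile size (frequency), which is the general spirit the abstract describes. The probabilities, the median split, and the label names are illustrative assumptions.

```python
import random

def assign_attributes(profile_sizes, p_small=0.8, p_large=0.2, seed=0):
    """Assign a synthetic binary attribute per user, linked to profile size.

    profile_sizes: dict user -> number of ratings in that user's profile.
    Users below the median profile size receive the 'protected' label with
    probability p_small; users at or above it with probability p_large.
    Varying p_small/p_large lets an experimenter test different attribute
    distributions on the same rating data.
    """
    rng = random.Random(seed)
    sizes = sorted(profile_sizes.values())
    median = sizes[len(sizes) // 2]
    labels = {}
    for user, n in profile_sizes.items():
        p = p_small if n < median else p_large
        labels[user] = "protected" if rng.random() < p else "unprotected"
    return labels
```

The seed makes a given synthetic assignment reproducible, so the same attribute configuration can be reused across algorithm runs.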

Research paper thumbnail of Multisided Fairness for Recommendation

ArXiv, 2017

Recent work on machine learning has begun to consider issues of fairness. In this paper, we extend the concept of fairness to recommendation. In particular, we show that in some recommendation contexts, fairness may be a multisided concept, in which fair outcomes for multiple individuals need to be considered. Based on these considerations, we present a taxonomy of classes of fairness-aware recommender systems and suggest possible fairness-aware recommendation architectures.

Research paper thumbnail of Using Uncertain Graphs to Automatically Generate Event Flows from News Stories

Capturing the branching flow of events described in text aids a host of tasks, from summarization to narrative generation to classification and prediction of events at points along the flow. In this paper, we present a framework for the automatic generation of an uncertain, temporally directed event graph from online sources such as news stories or social media posts. The vertices are generated using natural language processing techniques on the source documents, and the probabilities associated with edges, indicating the degree of certainty that those connections exist, are derived from shared entities among events. Graph edges are directed based on temporal information about events. Furthermore, we apply uncertain graph clustering in order to reduce noise and focus on higher-level event flows. Preliminary results indicate that the uncertain event graph produces a coherent navigation through the events described in a corpus.

Research paper thumbnail of User Segmentation for Controlling Recommendation Diversity

The quality of recommendations is known to be affected by diversity and novelty in addition to accuracy. Recent work has focused on methods that increase the diversity of recommendation lists. However, these methods assume that the preference for diversity is constant across all users. In this paper, we show that users’ propensity towards diversity varies greatly and argue that the diversity of recommendation lists should be consistent with the level of user interest in diverse recommendations. We introduce a user segmentation approach in order to personalize recommendations according to user preference for diversity. We show that recommendations generated using these segments match the diversity preferences of users in each segment. We also discuss the impact of this segmentation on the novelty of recommendations.
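The segmentation step can be sketched in a hedged way. Below, each user's diversity propensity is proxied by the number of distinct genres in their profile and users are split into equal-sized segments by rank; the propensity measure and the number of segments are illustrative assumptions, not necessarily the features used in the paper.

```python
def segment_users(profiles, n_segments=3):
    """Segment users by observed propensity for diversity.

    profiles: dict user -> list of genres of items the user consumed.
    Returns dict user -> segment index, where 0 is the least
    diversity-seeking segment and n_segments - 1 the most.
    """
    # Proxy for diversity propensity: count of distinct genres consumed.
    propensity = {u: len(set(genres)) for u, genres in profiles.items()}
    # Rank users from least to most diversity-seeking.
    ranked = sorted(propensity, key=propensity.get)
    seg_size = max(1, len(ranked) // n_segments)
    # Assign equal-sized segments by rank; the last segment absorbs any remainder.
    return {u: min(i // seg_size, n_segments - 1) for i, u in enumerate(ranked)}
```

A diversification method could then apply a stronger diversity target to higher-numbered segments instead of one global setting for all users.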

Research paper thumbnail of Exploring User Opinions of Fairness in Recommender Systems

Algorithmic fairness for artificial intelligence has become increasingly relevant as these systems become more pervasive in society. One realm of AI, recommender systems, presents unique challenges for fairness due to trade-offs between optimizing accuracy for users and fairness to providers. But what is fair in the context of recommendation, particularly when there are multiple stakeholders? In an initial exploration of this problem, we ask users what their ideas of fair treatment in recommendation might be, and why. We analyze what might cause discrepancies or changes in users' opinions towards fairness, to eventually help inform the design of fairer and more transparent recommendation algorithms.

Research paper thumbnail of Fairness and Discrimination in Information Access Systems

Recommendation, information retrieval, and other information access systems pose unique challenges for investigating and applying the fairness and non-discrimination concepts that have been developed for studying other machine learning systems. While fair information access shares many commonalities with fair classification, the multistakeholder nature of information access applications, the rank-based problem setting, the centrality of personalization in many cases, and the role of user response complicate the problem of identifying precisely what types and operationalizations of fairness may be relevant, let alone measuring or promoting them. In this monograph, we present a taxonomy of the various dimensions of fair information access and survey the literature to date on this new and rapidly-growing topic. We preface this with brief introductions to information access and algorithmic fairness, to facilitate use of this work by scholars with experience in one (or neither) of these fields.

Research paper thumbnail of User Factor Adaptation for User Embedding via Multitask Learning

Language varies across users and their fields of interest in social media data: words authored by a user across his/her interests may have different meanings (e.g., cool) or sentiments (e.g., fast). However, most existing methods for training user embeddings ignore the variations across user interests, such as product and movie categories (e.g., drama vs. action). In this study, we treat user interests as domains and empirically examine how user language can vary across this factor in three English social media datasets. We then propose a user embedding model that accounts for the language variability of user interests via a multitask learning framework. The model learns user language and its variations without human supervision. While existing work has mainly evaluated user embeddings by extrinsic tasks, we propose an intrinsic evaluation via clustering and also evaluate user embeddings with an extrinsic task, text classification. The experiments on the three English-language social media datasets...

Research paper thumbnail of Personalization, Fairness, and Post-Userism

Perspectives on Digital Humanism, 2021

The incorporation of fairness-aware machine learning presents a challenge for creators of personalized systems, such as recommender systems found in e-commerce, social media, and elsewhere. These systems are designed and promulgated as providing services tailored to each individual user’s unique needs. However, fairness may require that other objectives, possibly in conflict with personalization, also be satisfied. The theoretical framework of post-userism, which broadens the focus of design in HCI settings beyond the individual end user, provides an avenue for this integration. However, in adopting this approach, developers will need to offer new, more complex narratives of what personalized systems do and whose needs they serve.

Research paper thumbnail of Fairness and Transparency in Recommendation: The Users’ Perspective

Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization, 2021

Though recommender systems are defined by personalization, recent work has shown the importance of additional, beyond-accuracy objectives, such as fairness. Because users often expect their recommendations to be purely personalized, these new algorithmic objectives must be communicated transparently in a fairness-aware recommender system. While explanation has a long history in recommender systems research, there has been little work that attempts to explain systems that use a fairness objective. Although previous work in other branches of AI has explored the use of explanations as a tool to increase fairness, that work has not focused on recommendation. Here, we consider user perspectives on fairness-aware recommender systems and techniques for enhancing their transparency. We describe the results of an exploratory interview study that investigates user perceptions of fairness, recommender systems, and fairness-aware objectives. We propose three features, informed by the needs of our participants, that could improve user understanding of and trust in fairness-aware recommender systems.

Research paper thumbnail of VAMS 2017

Proceedings of the Eleventh ACM Conference on Recommender Systems, 2017

In this paper, we summarize VAMS 2017, a workshop on value-aware and multistakeholder recommendation co-located with RecSys 2017. The workshop encouraged forward-thinking papers in this new area of recommender systems research and obtained a diverse set of responses, ranging from application results to research overviews.

Research paper thumbnail of Personalized fairness-aware re-ranking for microlending

Proceedings of the 13th ACM Conference on Recommender Systems, 2019

Microlending can lead to improved access to capital in impoverished countries. Recommender systems could be used in microlending to provide efficient and personalized service to lenders. However, increasing concerns about discrimination in machine learning hinder the application of recommender systems to the microfinance industry. Most previous recommender systems focus on pure personalization, with fairness issues largely ignored. A desirable fairness property in microlending is to give borrowers from different demographic groups a fair chance of being recommended, as stated by Kiva. To achieve this goal, we propose a Fairness-Aware Re-ranking (FAR) algorithm to balance ranking quality and borrower-side fairness. Furthermore, we take into consideration that lenders may differ in their receptivity to the diversification of recommended loans, and develop a Personalized Fairness-Aware Re-ranking (PFAR) algorithm. Experiments on a real-world dataset from Kiva.org show that our re-ranking algorithms can significantly promote fairness with little sacrifice in accuracy, and can be attentive to individual lender preferences on loan diversity.
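The accuracy/fairness balance that this style of re-ranking strikes can be illustrated with a hedged sketch: a greedy re-ranker that adds a bonus to candidates from borrower groups not yet represented in the output list. The linear bonus and the single weight lam are illustrative assumptions, not the published objective; in a personalized variant, lam would be set per lender from their observed appetite for diverse loans.

```python
def rerank(candidates, k, lam=0.5):
    """Greedily build a top-k list trading relevance against group coverage.

    candidates: list of (item, relevance_score, group) tuples.
    lam = 0 reproduces pure relevance ranking; larger lam boosts items
    from groups that do not yet appear in the list.
    """
    remaining = sorted(candidates, key=lambda c: -c[1])
    chosen, seen_groups = [], set()
    while remaining and len(chosen) < k:
        def gain(c):
            item, score, group = c
            bonus = lam if group not in seen_groups else 0.0
            return score + bonus
        best = max(remaining, key=gain)   # pick the best relevance-plus-bonus item
        remaining.remove(best)
        chosen.append(best[0])
        seen_groups.add(best[2])
    return chosen
```

With a small candidate set, raising lam from 0 swaps a slightly less relevant item from an unrepresented group into the list, which is the qualitative behavior a fairness-aware re-ranker aims for.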

Research paper thumbnail of Emotions in Context-Aware Recommender Systems

Human–Computer Interaction Series, 2016

Recommender systems are decision aids that offer users personalized suggestions for products and other items. Context-aware recommender systems are an important subclass of recommender systems that take into account the context in which an item will be consumed or experienced. In context-aware recommendation research, a number of contextual features have been identified as important in different recommendation applications, such as companion in the movie domain, time and mood in the music domain, and weather or season in the travel domain. Emotions have also been demonstrated to be significant contextual factors in a variety of recommendation scenarios. In this chapter, we describe the role of emotions in context-aware recommendation, including defining and acquiring emotional features for recommendation purposes and incorporating such features into recommendation algorithms. We conclude with a sample evaluation, showing the utility of emotion in recommendation generation.

Research paper thumbnail of Similarity-Based Context-Aware Recommendation

Lecture Notes in Computer Science, 2015

Context-aware recommender systems (CARS) take context into consideration when modeling user preferences. There are two general ways to integrate context with recommendation: contextual filtering and contextual modeling. Currently, the most effective context-aware recommendation algorithms are based on contextual modeling approaches that estimate deviations in ratings across different contexts. In this paper, we propose context similarity as an alternative contextual modeling approach and examine different ways to represent context similarity and incorporate it into recommendation. More specifically, we show how context similarity can be integrated into the sparse linear method and matrix factorization algorithms. Our experimental results demonstrate that learning context similarity is a more effective approach to context-aware recommendation than modeling contextual rating deviations.
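One way to picture folding a context-similarity term into a matrix factorization prediction is the hedged sketch below. The cosine similarity between context embeddings and the way it scales the user-item interaction term are illustrative assumptions for exposition, not the paper's learned formulation (where similarity representations are trained jointly with the model).

```python
import math

def predict(p_u, q_i, ctx_vec, ctx_ref, global_bias=3.0):
    """Predict a rating from latent factors, modulated by context similarity.

    p_u, q_i : latent factor lists for the user and the item.
    ctx_vec  : embedding of the target context.
    ctx_ref  : embedding of a reference context (e.g. the condition
               the factors were fit under).
    A similarity near 1 leaves the interaction term untouched; a
    dissimilar context shrinks the prediction toward the global bias.
    """
    dot = sum(a * b for a, b in zip(p_u, q_i))
    num = sum(a * b for a, b in zip(ctx_vec, ctx_ref))
    den = math.sqrt(sum(a * a for a in ctx_vec)) * math.sqrt(sum(a * a for a in ctx_ref))
    sim = num / den if den else 0.0   # cosine similarity of the two contexts
    return global_bias + sim * dot
```

The design point this sketch conveys is that contexts influence predictions through how similar they are, rather than through a per-context rating deviation term.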

Research paper thumbnail of Recommender Systems and User Modeling

Research paper thumbnail of Strategic retrieval of tutorial stories

This paper describes SPIEL, a system for retrieving and presenting tutorial stories for students who are using a social simulation to learn social skills. SPIEL's task is primarily retrieval, but it requires techniques from case-based reasoning to perform it. SPIEL's stories are stored in video form, which prevents the use of text-based processing or indexing. Instead of using a story's text, SPIEL uses complex structured indices intended to represent what the story is about.

Research paper thumbnail of Multistakeholder Recommender Systems

Recommender Systems Handbook

Research paper thumbnail of Fairness in Information Access Systems

Recommendation, information retrieval, and other information access systems pose unique challenge... more Recommendation, information retrieval, and other information access systems pose unique challenges for investigating and applying the fairness and non-discrimination concepts that have been developed for studying other machine learning systems. While fair information access shares many commonalities with fair classification, the multistakeholder nature of information access applications, the rank-based problem setting, the centrality of personalization in many cases, and the role of user response complicate the problem of identifying precisely what types and operationalizations of fairness may be relevant, let alone measuring or promoting them. In this monograph, we present a taxonomy of the various dimensions of fair information access and survey the literature to date on this new and rapidly-growing topic. We preface this with brief introductions to information access and algorithmic fairness, to facilitate use of this work by scholars with experience in one (or neither) of these ...

Research paper thumbnail of Algorithm Selection with Librec-auto

Due to the complexity of recommendation algorithms, experimentation on recommender systems has be... more Due to the complexity of recommendation algorithms, experimentation on recommender systems has become a challenging task. Current recommendation algorithms, while powerful, involve large numbers of hyperparameters. Tuning hyperparameters for finding the best recommendation outcome often requires execution of large numbers of algorithmic experiments particularly when multiples evaluation metrics are considered. Existing recommender systems platforms fail to provide a basis for systematic experimentation of this type. In this paper, we describe librec-auto, a wrapper for the well-known LibRec library, which provides an environment that supports automated experimentation.

Research paper thumbnail of Crank up the Volume: Preference Bias Amplification in Collaborative Recommendation

ArXiv, 2019

Recommender systems are personalized: we expect the results given to a particular user to reflect... more Recommender systems are personalized: we expect the results given to a particular user to reflect that user's preferences. Some researchers have studied the notion of calibration, how well recommendations match users' stated preferences, and bias disparity the extent to which mis-calibration affects different user groups. In this paper, we examine bias disparity over a range of different algorithms and for different item categories and demonstrate significant differences between model-based and memory-based algorithms.

Research paper thumbnail of Adapting to User Preference Changes in Interactive Recommendation

Recommender systems have become essential tools in many application areas as they help alleviate ... more Recommender systems have become essential tools in many application areas as they help alleviate information overload by tailoring their recommendations to users' personal preferences. Users' interests in items, however, may change over time depending on their current situation. Without considering the current circumstances of a user, recommendations may match the general preferences of the user, but they may have small utility for the user in his/her current situation. We focus on designing systems that interact with the user over a number of iterations and at each step receive feedback from the user in the form of a reward or utility value for the recommended items. The goal of the system is to maximize the sum of obtained utilities over each interaction session. We use a multi-armed bandit strategy to model this online learning problem and we propose techniques for detecting changes in user preferences. The recommendations are then generated based on the most recent prefe...

Research paper thumbnail of Synthetic Attribute Data for Evaluating Consumer-side Fairness

ArXiv, 2018

When evaluating recommender systems for their fairness, it may be necessary to make use of demogr... more When evaluating recommender systems for their fairness, it may be necessary to make use of demographic attributes, which are personally sensitive and usually excluded from publicly-available data sets. In addition, these attributes are fixed and therefore it is not possible to experiment with different distributions using the same data. In this paper, we describe the Frequency-Linked Attribute Generation (FLAG) algorithm, and show its applicability for assigning synthetic demographic attributes to recommendation data sets.

Research paper thumbnail of Multisided Fairness for Recommendation

ArXiv, 2017

Recent work on machine learning has begun to consider issues of fairness. In this paper, we exten... more Recent work on machine learning has begun to consider issues of fairness. In this paper, we extend the concept of fairness to recommendation. In particular, we show that in some recommendation contexts, fairness may be a multisided concept, in which fair outcomes for multiple individuals need to be considered. Based on these considerations, we present a taxonomy of classes of fairness-aware recommender systems and suggest possible fairness-aware recommendation architectures.

Research paper thumbnail of Using Uncertain Graphs to Automatically Generate Event Flows from News Stories

Capturing the branching flow of events described in text aids a host of tasks, from summarization... more Capturing the branching flow of events described in text aids a host of tasks, from summarization to narrative generation to classification and prediction of events at points along the flow. In this paper, we present a framework for the automatic generation of an uncertain, temporally directed event graph from online sources such as news stories or social media posts. The vertices are generated using Natural Language Processing techniques on the source documents and the probabilities associated with edges, indicating the degree of certainty those connections exist, are derived based on shared entities among events. Graph edges are directed based on temporal information on events. Furthermore, we apply uncertain graph clustering in order to reduce noise and focus on higher-level event flows. Preliminary results indicate the uncertain event graph produces a coherent navigation through events described in a corpus.

Research paper thumbnail of User Segmentation for Controlling Recommendation Diversity

The quality of recommendations is known to be affected by diversity and novelty in addition to ac... more The quality of recommendations is known to be affected by diversity and novelty in addition to accuracy. Recent work has focused on methods that increase diversity of recommendation lists. However, these methods assume the user preference for diversity is constant across all users. In this paper, we show that users’ propensity towards diversity varies greatly and argue that the diversity of recommendation lists should be consistent with the level of user interest in diverse recommendations. We introduce a user segmentation approach in order to personalize recommendation according to user preference for diversity. We show that recommendations generated using these segments match the diversity preferences of users in each segment. We also discuss the impact of this segmentation on the novelty of recommendations.

Research paper thumbnail of Exploring User Opinions of Fairness in Recommender Systems

Algorithmic fairness for artificial intelligence has become increasingly relevant as these system... more Algorithmic fairness for artificial intelligence has become increasingly relevant as these systems become more pervasive in society. One realm of AI, recommender systems, presents unique challenges for fairness due to trade offs between optimizing accuracy for users and fairness to providers. But what is fair in the context of recommendation--particularly when there are multiple stakeholders? In an initial exploration of this problem, we ask users what their ideas of fair treatment in recommendation might be, and why. We analyze what might cause discrepancies or changes between user's opinions towards fairness to eventually help inform the design of fairer and more transparent recommendation algorithms.

Research paper thumbnail of Fairness and Discrimination in Information Access Systems

Recommendation, information retrieval, and other information access systems pose unique challenge... more Recommendation, information retrieval, and other information access systems pose unique challenges for investigating and applying the fairness and non-discrimination concepts that have been developed for studying other machine learning systems. While fair information access shares many commonalities with fair classi€cation, the multistakeholder nature of information access applications, the rank-based problem se‹ing, the centrality of personalization in many cases, and the role of user response complicate the problem of identifying precisely what types and operationalizations of fairness may be relevant, let alone measuring or promoting them. In this monograph, we present a taxonomy of the various dimensions of fair information access and survey the literature to date on this new and rapidly-growing topic. We preface this with brief introductions to information access and algorithmic fairness, to facilitate use of this work by scholars with experience in one (or neither) of these €e...

Research paper thumbnail of User Factor Adaptation for User Embedding via Multitask Learning

Language varies across users and their interested fields in social media data: words authored by ... more Language varies across users and their interested fields in social media data: words authored by a user across his/her interests may have different meanings (e.g., cool) or sentiments (e.g., fast). However, most of the existing methods to train user embeddings ignore the variations across user interests, such as product and movie categories (e.g., drama vs. action). In this study, we treat the user interest as domains and empirically examine how the user language can vary across the user factor in three English social media datasets. We then propose a user embedding model to account for the language variability of user interests via a multitask learning framework. The model learns user language and its variations without human supervision. While existing work mainly evaluated the user embedding by extrinsic tasks, we propose an intrinsic evaluation via clustering and evaluate user embeddings by an extrinsic task, text classification. The experiments on the three English-language soc...

Research paper thumbnail of Personalization, Fairness, and Post-Userism

Perspectives on Digital Humanism, 2021

The incorporation of fairness-aware machine learning presents a challenge for creators of persona... more The incorporation of fairness-aware machine learning presents a challenge for creators of personalized systems, such as recommender systems found in e-commerce, social media, and elsewhere. These systems are designed and promulgated as providing services tailored to each individual user’s unique needs. However, fairness may require that other objectives, possibly in conflict with personalization, also be satisfied. The theoretical framework of post-userism, which broadens the focus of design in HCI settings beyond the individual end user, provides an avenue for this integration. However, in adopting this approach, developers will need to offer new, more complex narratives of what personalized systems do and whose needs they serve.

Research paper thumbnail of Fairness and Transparency in Recommendation: The Users’ Perspective

Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization, 2021

Though recommender systems are defined by personalization, recent work has shown the importance o... more Though recommender systems are defined by personalization, recent work has shown the importance of additional, beyond-accuracy objectives, such as fairness. Because users often expect their recommendations to be purely personalized, these new algorithmic objectives must be communicated transparently in a fairness-aware recommender system. While explanation has a long history in recommender systems research, there has been little work that attempts to explain systems that use a fairness objective. Even though the previous work in other branches of AI has explored the use of explanations as a tool to increase fairness, this work has not been focused on recommendation. Here, we consider user perspectives of fairness-aware recommender systems and techniques for enhancing their transparency. We describe the results of an exploratory interview study that investigates user perceptions of fairness, recommender systems, and fairness-aware objectives. We propose three features-informed by the needs of our participants-that could improve user understanding of and trust in fairness-aware recommender systems.

Research paper thumbnail of Vams 2017

Proceedings of the Eleventh ACM Conference on Recommender Systems, 2017

In this paper, we summarize VAMS 2017 - a workshop on value-aware and multistakeholder recommenda... more In this paper, we summarize VAMS 2017 - a workshop on value-aware and multistakeholder recommendation co-located with RecSys 2017. The workshop encouraged forward-thinking papers in this new area of recommender systems research and obtained a diverse set of responses ranging from application results to research overviews.

Research paper thumbnail of Personalized fairness-aware re-ranking for microlending

Proceedings of the 13th ACM Conference on Recommender Systems, 2019

Microlending can lead to improved access to capital in impoverished countries. Recommender systems could be used in microlending to provide efficient and personalized service to lenders. However, increasing concerns about discrimination in machine learning hinder the application of recommender systems to the microfinance industry. Most previous recommender systems focus on pure personalization, with fairness issues largely ignored. A desirable fairness property in microlending is to give borrowers from different demographic groups a fair chance of being recommended, as stated by Kiva. To achieve this goal, we propose a Fairness-Aware Re-ranking (FAR) algorithm to balance ranking quality and borrower-side fairness. Furthermore, we take into consideration that lenders may differ in their receptivity to the diversification of recommended loans, and develop a Personalized Fairness-Aware Re-ranking (PFAR) algorithm. Experiments on a real-world dataset from Kiva.org show that our re-ranking algorithm can significantly promote fairness with little sacrifice in accuracy, and remain attentive to individual lender preferences on loan diversity.
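To illustrate the general idea of fairness-aware re-ranking (a minimal sketch in the spirit of FAR, not the paper's exact formulation), one can greedily rebuild a top-k list so that each pick trades off a precomputed relevance score against a bonus for covering a not-yet-represented borrower group; the trade-off weight `lam` and the coverage bonus are illustrative assumptions.

```python
def fairness_aware_rerank(candidates, scores, groups, k, lam=0.5):
    """Greedy fairness-aware re-ranking sketch.

    candidates: list of item ids
    scores: dict item -> relevance score from a base recommender
    groups: dict item -> demographic group label
    lam: illustrative weight balancing relevance vs. group coverage
    """
    selected, covered = [], set()
    remaining = set(candidates)
    while remaining and len(selected) < k:
        def utility(item):
            # Bonus only for items from groups not yet in the list.
            bonus = 1.0 if groups[item] not in covered else 0.0
            return (1 - lam) * scores[item] + lam * bonus
        best = max(remaining, key=utility)
        selected.append(best)
        covered.add(groups[best])
        remaining.remove(best)
    return selected
```

With `lam=0.5`, an item from an unrepresented group can outrank a slightly more relevant item from an already-covered group, diversifying the head of the list at a small cost in relevance.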

Research paper thumbnail of Emotions in Context-Aware Recommender Systems

Human–Computer Interaction Series, 2016

Recommender systems are decision aids that offer users personalized suggestions for products and other items. Context-aware recommender systems are an important subclass of recommender systems that take into account the context in which an item will be consumed or experienced. In context-aware recommendation research, a number of contextual features have been identified as important in different recommendation applications: such as companion in the movie domain, time and mood in the music domain, and weather or season in the travel domain. Emotions have also been demonstrated to be significant contextual factors in a variety of recommendation scenarios. In this chapter, we describe the role of emotions in context-aware recommendation, including defining and acquiring emotional features for recommendation purposes and incorporating such features into recommendation algorithms. We conclude with a sample evaluation, showing the utility of emotion in recommendation generation.

Research paper thumbnail of Similarity-Based Context-Aware Recommendation

Lecture Notes in Computer Science, 2015

Context-aware recommender systems (CARS) take context into consideration when modeling user preferences. There are two general ways to integrate context with recommendation: contextual filtering and contextual modeling. Currently, the most effective context-aware recommendation algorithms are based on a contextual modeling approach that estimates deviations in ratings across different contexts. In this paper, we propose context similarity as an alternative contextual modeling approach and examine different ways to represent context similarity and incorporate it into recommendation. More specifically, we show how context similarity can be integrated into the sparse linear method and matrix factorization algorithms. Our experimental results demonstrate that learning context similarity is a more effective approach to context-aware recommendation than modeling contextual rating deviations.
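As a rough illustration of the similarity-based contextual modeling idea (a sketch under assumptions, not the paper's exact model), a prediction can scale a plain matrix-factorization score by a learned similarity between the active context and a reference context, instead of adding a per-context rating deviation; the factor matrices and similarity values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, n_factors = 5, 4, 3

# Latent factors as in plain matrix factorization (illustrative random init;
# in practice these would be learned from rating data).
P = rng.normal(size=(n_users, n_factors))   # user factors
Q = rng.normal(size=(n_items, n_factors))   # item factors

# Learned similarity of each context to a reference context, in [0, 1]
# (context 0 is the reference itself, hence similarity 1.0).
context_sim = np.array([1.0, 0.6])

def predict(u, i, c):
    """Similarity-based contextual modeling sketch: the context
    attenuates the user-item score multiplicatively, rather than
    contributing an additive rating-deviation term."""
    return float(P[u] @ Q[i]) * context_sim[c]
```

The contrast with deviation-based contextual modeling is the functional form: here context enters as a multiplicative similarity weight on the base score, whereas deviation models add a context-specific offset to it.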

Research paper thumbnail of Recommender Systems and User Modeling

Research paper thumbnail of Strategic retrieval of tutorial stories

This paper describes SPIEL, a system for retrieving and presenting tutorial stories for students who are using a social simulation to learn social skills. SPIEL's task is primarily retrieval, but it requires techniques from case-based reasoning to perform it. SPIEL's stories are stored in video form, which prevents the use of text-based processing or indexing. Instead of using a story's text, SPIEL uses complex structured indices intended to represent what the story is about.