Paras Sheth - Academia.edu

Papers by Paras Sheth

Research paper thumbnail of Causal Disentanglement with Network Information for Debiased Recommendations

Recommender systems aim to recommend new items to users by learning user and item representations. In practice, these representations are highly entangled, as they consist of information about multiple factors, including the user's interests and item attributes, along with confounding factors such as user conformity and item popularity. Considering these entangled representations for inferring user preference may lead to biased recommendations (e.g., when the recommender model recommends popular items even if they do not align with the user's interests). Recent research proposes to debias recommendations by modeling the recommender system from a causal perspective. The exposure and the ratings are analogous to the treatment and the outcome in the causal inference framework, respectively. The critical challenge in this setting is accounting for the hidden confounders. These confounders are unobserved, making them hard to measure. On the other hand, since these confounders affect both the exposure and the ratings, it is essential to account for them in generating debiased recommendations. To better approximate hidden confounders, we propose to leverage network information (i.e., user-social and user-item networks), which has been shown to influence how users discover and interact with items. Aside from user conformity, confounding aspects such as item popularity present in the network information are also captured by our method with the aid of causal disentanglement, which unravels the learned representations into independent factors responsible for (a) modeling the exposure of an item to the user, (b) predicting the ratings, and (c) controlling the hidden confounders. Experiments on real-world datasets validate the effectiveness of the proposed model for debiasing recommender systems.
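
The disentanglement idea above can be pictured with a small sketch. The following PyTorch snippet is a minimal illustration, not the authors' implementation: it splits an entangled user-item embedding into three factor heads for exposure, preference (ratings), and hidden confounders. The module name, dimensions, and toy rating score are assumptions for illustration only.

```python
# Minimal sketch (assumed names and dimensions) of splitting a learned
# user/item embedding into three factors: exposure, preference, confounder.
import torch
import torch.nn as nn

class DisentangledEncoder(nn.Module):
    def __init__(self, in_dim: int = 64, factor_dim: int = 16):
        super().__init__()
        # One projection head per factor; in practice independence would be
        # encouraged by an extra regularizer (omitted here for brevity).
        self.exposure_head = nn.Linear(in_dim, factor_dim)
        self.preference_head = nn.Linear(in_dim, factor_dim)
        self.confounder_head = nn.Linear(in_dim, factor_dim)

    def forward(self, embedding: torch.Tensor):
        return (self.exposure_head(embedding),
                self.preference_head(embedding),
                self.confounder_head(embedding))

# Usage on a toy batch of entangled embeddings.
encoder = DisentangledEncoder()
z = torch.randn(8, 64)                                     # entangled representations
z_exposure, z_preference, z_confounder = encoder(z)
rating_logit = (z_preference * z_confounder).sum(dim=-1)   # toy rating score
print(rating_logit.shape)                                  # torch.Size([8])
```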

Research paper thumbnail of Evaluation Methods and Measures for Causal Learning Algorithms

IEEE Transactions on Artificial Intelligence, 2022

Convenient access to copious multi-faceted data has encouraged machine learning researchers to reconsider correlation-based learning and embrace the opportunity of causality-based learning, i.e., causal machine learning (causal learning). Recent years have therefore witnessed great effort in developing causal learning algorithms aimed at helping AI achieve human-level intelligence. Due to the lack of ground-truth data, one of the biggest challenges in current causal learning research is algorithm evaluation. This largely impedes the cross-pollination of AI and causal inference and hinders the two fields from benefiting from each other's advances. To bridge from conventional causal inference (i.e., based on statistical methods) to causal learning with big data (i.e., the intersection of causal inference and machine learning), in this survey we review commonly used datasets, evaluation methods, and measures for causal learning using an evaluation pipeline similar to that of conventional machine learning. We focus on the two fundamental causal inference tasks and on causality-aware machine learning tasks. Limitations of current evaluation procedures are also discussed. We then examine popular causal inference tools/packages and conclude with primary challenges and opportunities for benchmarking causal learning algorithms in the era of big data. The survey seeks to bring to the forefront the urgency of developing publicly available benchmarks and consensus-building standards for causal learning evaluation with observational data. In doing so, we hope to broaden the discussion and facilitate collaboration to advance the innovation and application of causal learning. Impact Statement: Causal learning goes beyond machine learning due to its power to uncover data-generating processes. Causality relates to crucial open problems in machine learning. Conversely, machine learning contributes to addressing fundamental challenges in causal inference. One key challenge of causal learning is that the research domain lacks public benchmark resources to support principled evaluation of research contributions. Our goal is to promote objectivity, reproducibility, fairness, collaboration, and awareness of bias in causal learning research. Arguing that this goal can only be achieved through systematic, objective, and transparent evaluation, in this survey we provide a comprehensive review of the evaluation of fundamental tasks in causal inference and causality-aware machine learning tasks. As in conventional machine learning, the causal evaluation pipeline includes evaluation protocols, metrics, datasets, and popular causal tools/packages. We also seek to expedite the marriage of causality and machine learning via discussions of prominent open problems and challenges.
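
As a concrete illustration of the measures such an evaluation pipeline relies on, the sketch below computes two treatment-effect metrics commonly used when (semi-)synthetic ground truth is available: absolute ATE error and PEHE. The function names and simulated data are illustrative assumptions, not part of the survey itself.

```python
# Sketch of two common causal-learning evaluation measures on simulated data.
import numpy as np

def ate_error(true_ite: np.ndarray, est_ite: np.ndarray) -> float:
    """Absolute difference between true and estimated average treatment effect."""
    return float(abs(true_ite.mean() - est_ite.mean()))

def pehe(true_ite: np.ndarray, est_ite: np.ndarray) -> float:
    """Precision in estimation of heterogeneous effects: RMSE over individual effects."""
    return float(np.sqrt(np.mean((true_ite - est_ite) ** 2)))

# Toy example with simulated individual treatment effects and a noisy estimator.
rng = np.random.default_rng(0)
true_ite = rng.normal(loc=2.0, scale=1.0, size=1000)
est_ite = true_ite + rng.normal(scale=0.5, size=1000)
print(f"ATE error: {ate_error(true_ite, est_ite):.3f}")
print(f"PEHE:      {pehe(true_ite, est_ite):.3f}")
```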

Research paper thumbnail of A Computer Vision Framework for Partitioning of Image-Object Through Graph Theoretical Heuristic Approach

The aim of this work is to develop a graph theoretical computer vision framework to partition the shape of an image object into parts based on a heuristic approach, such that the partitioning remains consistent with human perception. The proposed framework employs a special polygonal approximation scheme to represent a shape in a simpler graph form, where each polygonal side represents a graph edge. The shape-representative graph is then explored to determine its vertex-visibility graph using a simple algorithm presented in this paper. We propose a heuristic-based iterative clique-extraction strategy to decompose the shape-representative graph based on its vertex-visibility graph. The framework is evaluated on the MPEG-7 shape dataset and, according to our observations, its performance is comparable with existing schemes.
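
To make the vertex-visibility notion concrete, here is a minimal heuristic sketch, not the paper's algorithm: two polygon vertices are connected whenever the straight segment between them stays inside or on the boundary of the polygon. It relies on Shapely's covers() containment test, and the L-shaped example polygon is an assumption for illustration.

```python
# Heuristic sketch of a vertex-visibility graph for a simple polygon.
from itertools import combinations
from shapely.geometry import LineString, Polygon

def visibility_graph(vertices):
    """Return visible vertex pairs (by index) for a simple polygon."""
    poly = Polygon(vertices)
    edges = []
    for i, j in combinations(range(len(vertices)), 2):
        segment = LineString([vertices[i], vertices[j]])
        # covers() accepts segments lying on the boundary, so polygon
        # edges themselves are always included.
        if poly.covers(segment):
            edges.append((i, j))
    return edges

# Toy non-convex (L-shaped) polygon: vertex 1 cannot see vertex 4 directly.
l_shape = [(0, 0), (4, 0), (4, 2), (2, 2), (2, 4), (0, 4)]
print(visibility_graph(l_shape))
```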

Research paper thumbnail of Causal inference for time series analysis: problems, methods and evaluation

Knowledge and Information Systems, 2021

Time series data is a collection of chronological observations generated in many domains, such as medicine and finance. Over the years, different tasks such as classification, forecasting, and clustering have been proposed to analyze this type of data. Time series data has also been used to study the effect of interventions over time. Moreover, in many fields of science, learning the causal structure of dynamic systems and time series data is considered an interesting task that plays an important role in scientific discovery. Estimating the effect of an intervention and identifying the causal relations from data can be performed via causal inference. Existing surveys on time series discuss traditional tasks such as classification and forecasting or explain the details of the approaches proposed to solve a specific task. In this paper, we focus on two causal inference tasks for time series data, i.e., treatment effect estimation and causal discovery, and provide a comprehensive review of the approaches for each task. Furthermore, we curate a list of commonly used evaluation metrics and datasets for each task and provide in-depth insights. These metrics and datasets can serve as benchmarks for research in the field.
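
As a small example of one causal-discovery check commonly applied to time series, the sketch below runs a Granger-causality test with statsmodels on simulated data in which one series drives the other with a one-step lag. The simulation and the chosen lag are illustrative assumptions, and Granger causality is a predictive rather than strictly interventional notion.

```python
# Sketch: Granger-causality test on simulated lag-coupled series.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(42)
n = 500
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(2, n):
    # y depends on lagged x, so x should "Granger-cause" y.
    y[t] = 0.6 * x[t - 1] + 0.2 * y[t - 1] + rng.normal(scale=0.5)

# Column order: the test asks whether the second column helps predict the first.
data = np.column_stack([y, x])
results = grangercausalitytests(data, maxlag=2)
p_value = results[1][0]["ssr_ftest"][1]      # p-value at lag 1
print(f"p-value (lag 1): {p_value:.4f}")
```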

Research paper thumbnail of Causal Learning for Socially Responsible AI

Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, 2021

There have been increasing concerns about Artificial Intelligence (AI) due to its unfathomable potential power. To make AI address ethical challenges and shun undesirable outcomes, researchers have proposed developing socially responsible AI (SRAI). One of these approaches is causal learning (CL). We survey state-of-the-art methods of CL for SRAI. We begin by examining the seven CL tools for enhancing the social responsibility of AI, then review how existing works have used these tools to tackle issues in developing SRAI, such as fairness. The goal of this survey is to bring to the forefront the potential and promise of CL for SRAI.

Research paper thumbnail of CauseBox

Proceedings of the 30th ACM International Conference on Information & Knowledge Management, 2021

Causal inference is a critical task in various fields such as healthcare, economics, marketing, and education. Recently, there have been significant advances through the application of machine learning techniques, especially deep neural networks. Unfortunately, to date many of the proposed methods have been evaluated on different (data, software/hardware, hyperparameter) setups, and consequently it is nearly impossible to compare the efficacy of the available methods or reproduce results presented in the original research manuscripts. In this paper, we propose a causal inference toolbox (CauseBox) that addresses the aforementioned problems. At the time of publication, the toolbox includes seven state-of-the-art causal inference methods and two benchmark datasets. By providing convenient command-line and GUI-based interfaces, the CauseBox toolbox helps researchers fairly compare state-of-the-art methods in their chosen application context against benchmark datasets. The code is made public at github.com/paras2612/CauseBox. CCS Concepts: • Computing methodologies → Causal reasoning and diagnostics; Machine learning algorithms; Neural networks; Learning latent representations.
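
To illustrate the fair-comparison protocol such a toolbox enables (same data, same seed, same metric for every method), here is a hypothetical sketch with two stand-in estimators, difference-in-means and a simple regression adjustment. Neither the estimators nor the data generator correspond to the methods or benchmarks actually shipped with CauseBox.

```python
# Hypothetical comparison protocol: identical data, seed, and metric per method.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
n = 2000
x = rng.normal(size=(n, 5))
t = rng.binomial(1, 1 / (1 + np.exp(-x[:, 0])))            # confounded treatment
y = 2.0 * t + x @ np.array([1.0, 0.5, 0.0, 0.0, 0.0]) + rng.normal(size=n)
true_ate = 2.0

def diff_in_means(x, t, y):
    return y[t == 1].mean() - y[t == 0].mean()

def regression_adjustment(x, t, y):
    model = LinearRegression().fit(np.column_stack([t, x]), y)
    return model.coef_[0]                                    # coefficient on t

for name, estimator in [("diff-in-means", diff_in_means),
                        ("regression adjustment", regression_adjustment)]:
    est = estimator(x, t, y)
    print(f"{name:>22}: ATE={est:.3f}  |error|={abs(est - true_ate):.3f}")
```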
