Catarina Moreira | Instituto Superior Tecnico
Papers by Catarina Moreira
This article presents the first bibliometric analysis of the 708 papers published by the Congresso Nacional de Bibliotecários, Arquivistas e Documentalistas between 1985 and 2012, developing indicators of output, productivity and institutional provenance, together with a thematic analysis, from a quantitative, relational and diachronic perspective. The results portray a dynamic, essentially national and professional congress, with a strong predominance of single authorship despite the recent growth in the collaboration rate. In its thematic analysis, emphasis falls on public information services, most notably the world of libraries, while reflections on professional and academic training in Information Science, as well as attention to the most recent technological developments, also remain important.
We analyse a quantum-like Bayesian Network that brings together cause/effect relationships and semantic similarities between events. These semantic similarities constitute acausal connections according to the Synchronicity principle and provide new relationships to quantum-like probabilistic graphical models. As a consequence, beliefs (or any other events) can be represented in vector spaces, in which the quantum parameters are determined by the similarities between these vectors. Events connected by semantic meaning do not need an explanation in terms of cause and effect.
We propose a new quantum Bayesian Network model for computing probabilistic inferences in decision-making scenarios. Applying a quantum paradigm to decision making generates interference effects that influence probabilistic inferences. These effects do not exist in a classical setting and constitute a major issue in the decision process, because they introduce quantum parameters whose number grows rapidly with the amount of uncertainty in the problem. To compute these quantum parameters automatically, we propose a heuristic inspired by Jung's Synchronicity principle. Synchronicity can be defined as a meaningful coincidence between a mental state and an event occurring in the external world: the occurrence of meaningful, but not causally connected, events. We tested our quantum Bayesian Network together with the Synchronicity-inspired heuristic on empirical categorization/decision experiments in which the law of total probability was violated. Results showed that the proposed quantum model was able to reproduce the empirical findings observed in those experiments. We then applied our model to a more general scenario and showed the differences between classical and quantum inferences in a Lung Cancer medical diagnosis Bayesian Network.
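A minimal sketch of the interference idea described above, assuming a single quantum parameter theta shared by all pairs of amplitudes and a similarity-based heuristic that derives theta from the cosine similarity of two event vectors; the exact parameterization and the normalisation step used in the paper are not reproduced here.

```python
import numpy as np

def classical_total_probability(p_a, p_b_given_a):
    """Classical law of total probability: P(B) = sum_i P(A_i) P(B|A_i)."""
    return float(np.sum(np.asarray(p_a) * np.asarray(p_b_given_a)))

def quantum_like_total_probability(p_a, p_b_given_a, theta):
    """Quantum-like version: probabilities become amplitude magnitudes and an
    interference term, weighted by cos(theta), is added to the classical sum.
    In the full model the result is normalised so the outcomes sum to one;
    that step is omitted in this sketch."""
    amp = np.sqrt(np.asarray(p_a) * np.asarray(p_b_given_a))  # |psi_i|
    classical = float(np.sum(amp ** 2))
    interference = 0.0
    for i in range(len(amp)):
        for j in range(i + 1, len(amp)):
            interference += 2.0 * amp[i] * amp[j] * np.cos(theta)
    return classical + interference

def similarity_angle(v1, v2):
    """Hypothetical Synchronicity-style heuristic: derive the interference
    angle from the cosine similarity of two event vectors (assumption)."""
    cos_sim = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.arccos(np.clip(cos_sim, -1.0, 1.0))

# Toy two-branch example: P(A) = [0.6, 0.4], P(B|A_i) = [0.7, 0.2].
p_a, p_b_given_a = [0.6, 0.4], [0.7, 0.2]
theta = similarity_angle(np.array([1.0, 0.2]), np.array([0.4, 1.0]))
print(classical_total_probability(p_a, p_b_given_a))          # 0.5
print(quantum_like_total_probability(p_a, p_b_given_a, theta))  # differs via interference
```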
We investigate exact indexing for high-dimensional Lp norms based on the 1-Lipschitz property and projection operators.
The orthogonal projection that satisfies the 1-Lipschitz property for the Lp norm is described. The adaptive projection defined by the first principal component is introduced.
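A minimal sketch of how a 1-Lipschitz projection supports exact indexing, assuming the L2 norm and the first principal component as the adaptive projection direction; because the projected distance never exceeds the true distance, it can prune candidates in a range query without false dismissals. The data, radius and helper names are illustrative.

```python
import numpy as np

def project(data, direction):
    """Project each vector onto a unit direction. For the L2 norm this map is
    1-Lipschitz: |<x,u> - <y,u>| <= ||x - y||_2 (Cauchy-Schwarz)."""
    u = direction / np.linalg.norm(direction)
    return data @ u, u

def range_query(data, keys, u, query, radius):
    """Exact range query: prune with the 1-D projected distance (a lower bound
    on the true distance), then verify survivors with the full L2 distance."""
    q_key = query @ u
    candidates = np.where(np.abs(keys - q_key) <= radius)[0]  # no false dismissals
    dists = np.linalg.norm(data[candidates] - query, axis=1)
    return candidates[dists <= radius]

rng = np.random.default_rng(0)
data = rng.normal(size=(10_000, 64))
# First principal component as the adaptive projection direction (assumption).
_, _, vt = np.linalg.svd(data - data.mean(axis=0), full_matrices=False)
keys, u = project(data, vt[0])
hits = range_query(data, keys, u, rng.normal(size=64), radius=9.0)
print(len(hits))
```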
Probabilistic graphical models such as Bayesian Networks are among the most powerful structures known to the Computer Science community for deriving probabilistic inferences. However, modern cognitive psychology has revealed that human decisions do not always follow the rules of classical probability theory, because humans cannot process large amounts of data in order to make judgements. Consequently, the inferences people perform are based on limited data coupled with several heuristics, leading to violations of the law of total probability. This means that probabilistic graphical models based on classical probability theory are too limited to fully simulate and explain various aspects of human decision making.
Quantum probability theory was developed to accommodate the paradoxical findings that the classical theory could not explain. Recent findings in cognitive psychology revealed that quantum probability can fully describe human decisions in an elegant framework. These findings suggest that, before making a decision, human thoughts behave like superposed waves that can interfere with each other, influencing the final decision.
In this work, we propose a new Bayesian Network based on the psychological findings of cognitive scientists. We performed experiments with two well-known Bayesian Networks from the literature. The results revealed that the quantum-like Bayesian Network can drastically affect the probabilistic inferences, especially when the levels of uncertainty of the network are very high (no pieces of evidence observed). When the levels of uncertainty are very low, the proposed quantum-like network collapses to its classical counterpart.
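In the usual quantum-like reading of the abstract above (a sketch using standard amplitude notation ψ, which may differ from the paper's exact parameterization), the probability of a query variable expands into a classical part plus an interference term; when all the phase differences give cos(θ_i − θ_j) = 0, the interference term vanishes and the expression reduces to the classical law of total probability, which is how the quantum-like network can recover its classical counterpart.

```latex
\[
\Pr(B) \;=\; \Bigl|\sum_{i}\psi(A_i)\,\psi(B \mid A_i)\Bigr|^{2}
       \;=\; \underbrace{\sum_{i}\lvert\psi(A_i)\rvert^{2}\,\lvert\psi(B \mid A_i)\rvert^{2}}_{\text{classical part}}
       \;+\; \underbrace{2\sum_{i<j}\lvert\psi(A_i)\,\psi(B \mid A_i)\rvert\,\lvert\psi(A_j)\,\psi(B \mid A_j)\rvert\,\cos(\theta_i-\theta_j)}_{\text{interference term}}
\]
```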
Expert finding is an information retrieval task concerned with searching for the most knowledgeable people with respect to a specific topic, based on documents that describe people's activities. The task involves taking a user query as input and returning a list of people sorted by their level of expertise with respect to that query. Despite recent interest in the area, the current state-of-the-art techniques lack principled approaches for optimally combining different sources of evidence. This article proposes two frameworks for combining multiple estimators of expertise. These estimators are derived from textual contents, from the graph structure of the citation patterns for the community of experts, and from profile information about the experts. More specifically, the article explores the use of supervised learning to rank methods, as well as rank aggregation approaches, for combining all of the estimators of expertise. Several supervised learning algorithms, representative of the pointwise, pairwise and listwise approaches, were tested, and various state-of-the-art data fusion techniques were also explored for the rank aggregation framework. Experiments performed on a dataset of academic publications from the Computer Science domain attest to the adequacy of the proposed approaches.
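A minimal sketch of the pointwise learning to rank flavour mentioned above, assuming three per-candidate expertise scores (textual, citation graph, profile) have already been computed and graded relevance labels are available; the ridge regressor, feature values and labels are illustrative stand-ins, not the algorithms or data used in the article.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical per-candidate expertise estimators for one query:
# columns = [textual score, citation-graph score, profile score].
X_train = np.array([[0.82, 0.40, 0.9],
                    [0.35, 0.75, 0.2],
                    [0.10, 0.15, 0.4],
                    [0.60, 0.55, 0.7]])
y_train = np.array([2, 1, 0, 2])        # graded relevance labels (assumption)

# Pointwise learning to rank: regress the relevance grade from the features,
# then sort candidates by the predicted score at query time.
model = Ridge(alpha=1.0).fit(X_train, y_train)

X_test = np.array([[0.70, 0.30, 0.8],
                   [0.20, 0.90, 0.1]])
scores = model.predict(X_test)
ranking = np.argsort(-scores)           # candidate indices, best first
print(ranking, scores[ranking])
```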
Expert finding is an information retrieval task concerned with searching for the most knowledgeable people on some topic, based on documents describing people's activities. The task involves taking a user query as input and returning a list of people sorted by their level of expertise regarding that query. This paper introduces a novel approach for combining multiple estimators of expertise based on a multisensor data fusion framework, together with the Dempster-Shafer theory of evidence and Shannon's entropy. More specifically, we defined three sensors which detect heterogeneous information derived from the textual contents, from the graph structure of the citation patterns for the community of experts, and from profile information about the academic experts. Given the evidence collected, the sensors may nominate different candidates as experts and consequently disagree on a final ranking decision. To deal with these conflicts, we applied the Dempster-Shafer theory of evidence combined with Shannon's entropy formula to fuse this information and produce a more accurate and reliable final ranking list. Experiments performed on two datasets of academic publications from the Computer Science domain attest to the adequacy of the proposed approach over traditional state-of-the-art approaches. We also ran experiments against representative supervised state-of-the-art algorithms. Results revealed that the proposed method achieved a performance similar to these supervised techniques, confirming the capabilities of the proposed framework.
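A minimal sketch of Dempster's rule of combination for two sensors over a small frame of candidate experts; the mass values and sensor names are invented for illustration, and the Shannon-entropy weighting of sensors described in the abstract is not reproduced here.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: combine two mass functions defined over frozensets of
    candidates. The conflict is the mass assigned to empty intersections."""
    combined, conflict = {}, 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2
    norm = 1.0 - conflict
    return {s: w / norm for s, w in combined.items()}, conflict

# Two "sensors" (e.g. textual and citation-graph evidence) over three experts.
A, B, C = frozenset("A"), frozenset("B"), frozenset("C")
text_sensor  = {A: 0.6, B: 0.3, frozenset("ABC"): 0.1}
graph_sensor = {A: 0.5, C: 0.3, frozenset("ABC"): 0.2}

fused, conflict = dempster_combine(text_sensor, graph_sensor)
print(sorted(fused.items(), key=lambda kv: -kv[1]), "conflict =", round(conflict, 3))
```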
This study presents an overview of IS&LS thematic trends in Portugal between 2001 and 2012. The results were obtained by analysing, with expeditious qualitative and quantitative techniques, the bibliographic records of proceedings papers identified during this period. These records were processed using two techniques: a manual subject classification and an automated co-word analysis of the author-assigned keywords. From this we designed cluster and co-occurrence maps using the VOSviewer and Pajek software packages.
The results indicate a pronounced dynamism in the thematic evolution of this documental corpus, and also reveal a significant difference between the themes conveyed in nationally and internationally visible production.
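A minimal sketch of the co-word step, assuming each bibliographic record exposes a list of author-assigned keywords; the resulting keyword co-occurrence edge list is the kind of input that VOSviewer or Pajek can turn into cluster and co-occurrence maps. The records, thresholds and normalisation used in the study itself are not reproduced.

```python
from collections import Counter
from itertools import combinations

# Hypothetical author-assigned keyword lists, one per bibliographic record.
records = [
    ["digital libraries", "open access", "repositories"],
    ["open access", "repositories", "scholarly communication"],
    ["information literacy", "digital libraries"],
]

# Co-word analysis: count how often each pair of keywords appears together.
cooccurrence = Counter()
for keywords in records:
    for pair in combinations(sorted(set(keywords)), 2):
        cooccurrence[pair] += 1

# Pairs sorted by strength; this edge list can be exported for VOSviewer/Pajek.
for (kw1, kw2), weight in cooccurrence.most_common():
    print(f"{kw1} -- {kw2}: {weight}")
```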
The task of expert finding has been receiving increasing attention in the information retrieval literature. However, the current state of the art still lacks principled approaches for combining different sources of evidence. This paper explores the use of unsupervised rank aggregation methods as a principled approach for combining multiple estimators of expertise, derived from the textual contents, from the graph structure of the citation patterns for the community of experts, and from profile information about the experts. We specifically experimented with two unsupervised rank aggregation approaches well known in the information retrieval literature, namely CombSUM and CombMNZ. Experiments performed on a dataset of academic publications from the Computer Science domain attest to the adequacy of these methods.
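A minimal sketch of the two fusion rules named above, assuming each estimator produces scores already normalised to a common range: CombSUM sums a candidate's normalised scores, and CombMNZ multiplies that sum by the number of estimators that returned the candidate. Candidate names and scores are invented for illustration.

```python
def comb_sum(score_lists):
    """CombSUM: sum of the (normalised) scores a candidate receives."""
    fused = {}
    for scores in score_lists:
        for candidate, s in scores.items():
            fused[candidate] = fused.get(candidate, 0.0) + s
    return fused

def comb_mnz(score_lists):
    """CombMNZ: CombSUM multiplied by the number of lists containing the candidate."""
    sums = comb_sum(score_lists)
    hits = {c: sum(1 for scores in score_lists if c in scores) for c in sums}
    return {c: sums[c] * hits[c] for c in sums}

# Hypothetical normalised scores from three expertise estimators.
text    = {"alice": 0.9, "bob": 0.4}
graph   = {"alice": 0.6, "carol": 0.8}
profile = {"bob": 0.7, "carol": 0.5, "alice": 0.2}

ranking = sorted(comb_mnz([text, graph, profile]).items(), key=lambda kv: -kv[1])
print(ranking)   # alice first: high combined score and present in all three lists
```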
The task of expert finding has been receiving increasing attention in the information retrieval literature. However, the current state of the art still lacks principled approaches for combining different sources of evidence in an optimal way. This paper explores the use of learning to rank methods as a principled approach for combining multiple estimators of expertise, derived from the textual contents, from the graph structure of the citation patterns for the community of experts, and from profile information about the experts. Experiments performed on a dataset of academic publications from the Computer Science domain attest to the adequacy of the proposed approaches.
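Complementing the pointwise sketch further above, this is a minimal pairwise learning to rank sketch in the RankSVM style, again assuming three pre-computed expertise scores per candidate and graded relevance labels; the classifier, features and labels are illustrative and not necessarily those used in the paper.

```python
import numpy as np
from itertools import combinations
from sklearn.svm import LinearSVC

# Same hypothetical feature layout as before: [text, graph, profile] per candidate.
X = np.array([[0.82, 0.40, 0.9],
              [0.35, 0.75, 0.2],
              [0.10, 0.15, 0.4],
              [0.60, 0.55, 0.7]])
y = np.array([2, 1, 0, 2])               # graded relevance labels (assumption)

# Pairwise transform: for each pair of candidates with different labels,
# learn to classify which of the two should be ranked higher.
pairs, targets = [], []
for i, j in combinations(range(len(y)), 2):
    if y[i] == y[j]:
        continue
    pairs.append(X[i] - X[j])
    targets.append(1 if y[i] > y[j] else -1)

clf = LinearSVC().fit(np.array(pairs), np.array(targets))

# At query time, rank candidates by the learned linear scoring function w . x.
scores = np.array([[0.70, 0.30, 0.8], [0.20, 0.90, 0.1]]) @ clf.coef_.ravel()
print(np.argsort(-scores), scores)
```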
When developing a conversational agent, there is often an urgent need to have a prototype available in order to test the application with real users. A Wizard of Oz setup is one possibility, but sometimes the agent should simply be deployed in the environment where it will be used. In that setting, the agent should be able to capture as many interactions as possible and to help us understand how people react to failure. In this paper, we focus on the rapid development of a natural language understanding module by non-experts. Our approach follows the learning paradigm and treats natural language understanding as a classification problem.
We test our module with a conversational agent that answers questions in the art domain. Moreover, we show how our approach can be used by a natural language interface to a cinema database.
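A minimal sketch of the idea of treating natural language understanding as classification, assuming a handful of labelled example utterances from the art and cinema domains mentioned above; the intent labels, example sentences, and the TF-IDF plus logistic regression pipeline are illustrative choices, not the learner used in the paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled utterances: each label is an interpretation/intent
# that the conversational agent knows how to answer.
utterances = [
    ("who painted this picture",         "ask_author"),
    ("when was this work created",       "ask_date"),
    ("what movies are playing tonight",  "list_sessions"),
    ("is there a late session tomorrow", "list_sessions"),
    ("tell me about the artist",         "ask_author"),
]
texts, labels = zip(*utterances)

# NLU as text classification: map an input sentence to one of the known intents.
nlu = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
nlu.fit(texts, labels)

print(nlu.predict(["who is the author of this painting"]))   # likely 'ask_author'
```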