Paulo Silva | Universidade Federal do Rio Grande do Sul

Papers by Paulo Silva

Gradual trust and distrust in recommender systems

Fuzzy Sets and Systems, 2009

Trust networks among users of a recommender system (RS) prove beneficial to the quality and amount of the recommendations. Since trust is often a gradual phenomenon, fuzzy relations are the pre-eminent tools for modeling such networks. However, because current trust-enhanced RSs do not work with the notion of distrust, they can neither differentiate unknown users from malicious users nor represent inconsistency. These are serious drawbacks in large networks, where many users are unknown to each other and might provide contradictory information. In this paper, we advocate a trust model in which trust scores are (trust, distrust) couples drawn from a bilattice that preserves valuable trust provenance information, including gradual trust, distrust, ignorance, and inconsistency. We pay particular attention to deriving trust information through a trusted third party, which becomes especially challenging when distrust is also involved.
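The (trust, distrust)-couple idea can be sketched in a few lines. The propagation operator below, in which an intermediary's opinion is adopted to the degree the intermediary is trusted, is just one illustrative choice from the trust-propagation literature, not necessarily the operator adopted in the paper:

```python
from typing import NamedTuple

class TrustScore(NamedTuple):
    """A point in the trust bilattice: degrees of trust and distrust in [0, 1].
    (0, 0) encodes ignorance; t + d > 1 encodes inconsistency."""
    t: float  # trust degree
    d: float  # distrust degree

def propagate(ab: TrustScore, bc: TrustScore) -> TrustScore:
    """Estimate a's score for c via third party b.
    Illustrative operator: a adopts b's opinion to the extent that a trusts b."""
    return TrustScore(t=ab.t * bc.t, d=ab.t * bc.d)

def ignorance(s: TrustScore) -> float:
    """How much is simply unknown (meaningful when t + d <= 1)."""
    return max(0.0, 1.0 - s.t - s.d)

# a fully trusts b; b partially distrusts c -> a inherits that opinion of c
ab = TrustScore(1.0, 0.0)
bc = TrustScore(0.2, 0.7)
ac = propagate(ab, bc)
print(ac)                                # TrustScore(t=0.2, d=0.7)
print(ignorance(TrustScore(0.0, 0.0)))   # 1.0 -> an unknown user, not a distrusted one
```

Note how the couple keeps an unknown user, score (0, 0), distinct from a distrusted one, score (0, 1), which a single trust value cannot do.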

A Many Valued Representation and Propagation of Trust and Distrust

As the amount of information on the web grows, users face increasing challenges in deciding whether to trust, and sometimes to distrust, sources. One possible aid is to maintain a network of trust between sources. In this paper, we propose to model such a network as an intuitionistic fuzzy relation. This allows us to handle elegantly and together the problem of ignorance, i.e. not knowing whether to trust or not, and vagueness, i.e. trust as a matter of degree. We pay special attention to deriving trust information through a trusted third party, which becomes especially challenging when distrust is involved.
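The distinction between ignorance and vagueness falls out of the intuitionistic fuzzy representation directly: a value is a pair (mu, nu) with mu + nu <= 1, and whatever is left over is the hesitation margin. A minimal sketch (the function name is ours, not the paper's):

```python
def hesitation(mu: float, nu: float) -> float:
    """Hesitation margin of an intuitionistic fuzzy value.
    mu = degree of trust, nu = degree of distrust, with mu + nu <= 1."""
    if mu < 0 or nu < 0 or mu + nu > 1:
        raise ValueError("not a valid intuitionistic fuzzy value")
    return 1.0 - mu - nu

# vagueness: a definite but partial opinion, little left undecided
print(hesitation(0.6, 0.3))   # ~0.1
# ignorance: nothing known either way
print(hesitation(0.0, 0.0))   # 1.0
```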

Notational Support for the Design of Augmented Reality Systems

There is growing interest in augmented reality (AR) as technologies are developed that enable ever smoother integration of computer capabilities into the physical objects that populate the everyday lives of users. However, despite this growing importance of AR technologies, there is little tool support for the design of AR systems. In this paper, we present two notations, ASUR and UMLi, that can be used to capture design-significant features of AR systems. ASUR is a notation for designing user interactions in AR environments. UMLi is a notation for designing the user interfaces to interactive systems. We use each notation to specify the design of an augmented museum gallery. We then compare the two notations in terms of the types of support they provide and consider how they might be used together.

Teallach: a model-based user interface development environment for object databases

Interacting with Computers, 2001

Model-based user interface development environments show promise for improving the productivity of user interface developers, and possibly for improving the quality of developed interfaces. While model-based techniques have previously been applied to the area of database interfaces, they have not been specifically targeted at the important area of object database applications. Such applications make use of models that are semantically richer than their relational counterparts in terms of both data structures and application functionality. In general, model-based techniques have not addressed how the information referenced in such applications is manifested within the described models, and is utilised within the generated interface itself. This lack of experience with such systems has led to many model-based projects providing minimal support for certain features that are essential to such data intensive applications, and has prevented object database interface developers in particular from benefiting from model-based techniques. This paper presents the Teallach model-based user interface development environment for object databases, describing the models it supports, the relationships between these models, the tool used to construct interfaces using the models, and the generation of Java programs from the declarative models. Distinctive features of Teallach include comprehensive facilities for linking models, a flexible development method, an open architecture, and the generation of running applications based on the models constructed by designers.

UMLi: The Unified Modeling Language for Interactive Applications

User interfaces (UIs) are essential components of most software systems and significantly affect the effectiveness of installed applications. In addition, UIs often represent a significant proportion of the code delivered by a development activity. Despite this, there are no modelling languages and tools that support contract elaboration between UI developers and application developers. The Unified Modeling Language (UML) has been widely accepted by application developers, but not so much by UI designers. For this reason, this paper introduces the notation of the Unified Modeling Language for Interactive Applications (UMLi), which extends UML to provide greater support for UI design. UI elements elicited in use cases and their scenarios can be used during the design of activities and UI presentations. A diagram notation for modelling user interface presentations is introduced, and activity diagram notation is extended to describe collaboration between interaction and domain objects. Finally, a case study using the UMLi notation and method is presented.

User Interface Modelling with UML

The Unified Modeling Language (UML) is a natural candidate for user interface (UI) modelling since it is the standard notation for object oriented modelling of applications. However, it is by no means clear how to model UIs using UML. This paper presents a user interface modelling case study using UML. This case study identifies some aspects of UIs that cannot be modelled using UML notation, and a set of UML constructors that may be used to model UIs. The modelling problems indicate some weaknesses of UML for modelling UIs, while the constructors exploited indicate some strengths. The identification of such strengths and weaknesses can be used in the formulation of a strategy for extending UML to provide greater support for user interface design.

Knowledge Provenance Infrastructure

IEEE Data(base) Engineering Bulletin, 2003

The web lacks support for explaining information provenance. When web applications return answers, many users do not know what information sources were used, when they were updated, how reliable the source was, or what information was looked up versus derived. Support for information provenance is expected to be a harder problem in the Semantic Web, where more answers result from some manipulation of information (instead of simple retrieval of information). Manipulation includes, among other things, ...

Explaining answers from the Semantic Web: the Inference Web approach

Journal of Web Semantics, 2004

The Semantic Web lacks support for explaining answers from web applications. When applications return answers, many users do not know what information sources were used, when they were updated, how reliable the source was, or what information was looked up versus derived. Many users also do not know how implicit answers were derived. The Inference Web (IW) aims to take opaque query answers and make them more transparent by providing infrastructure for presenting and managing explanations. The explanations include information concerning where answers came from (knowledge provenance) and how they were derived (or retrieved). In this article we describe an infrastructure for IW explanations. The infrastructure includes: IWBase, an extensible web-based registry containing details about information sources, reasoners, languages, and rewrite rules; PML, the Proof Markup Language specification and API used for encoding portable proofs; the IW browser, a tool supporting navigation and presentation of proofs and their explanations; and a new explanation dialogue component. Source information in the IWBase is used to convey knowledge provenance. Representation and reasoning language axioms and rewrite rules in the IWBase are used to support proofs, proof combination, and Semantic Web agent interoperability. The Inference Web is in use by four Semantic Web agents, three of them using embedded reasoning engines fully registered in the IW. Inference Web also provides explanation infrastructure for a number of DARPA and ARDA projects.
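The core data structure behind such explanations is a justification tree: each conclusion records the rule that produced it, the source it came from, and the antecedent conclusions it depends on. The sketch below uses illustrative field names, not the actual PML vocabulary, to show how knowledge provenance can be read off such a tree:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Step:
    rule: str                      # inference rule, or "told" for a direct lookup
    source: Optional[str] = None   # provenance: which registered source/reasoner
    antecedents: List["Node"] = field(default_factory=list)

@dataclass
class Node:
    conclusion: str
    step: Step

def provenance(node: Node) -> set:
    """Collect every source that contributed to a conclusion."""
    srcs = {node.step.source} if node.step.source else set()
    for a in node.step.antecedents:
        srcs |= provenance(a)
    return srcs

# hypothetical sources and facts, for illustration only
fact = Node("bird(tweety)", Step("told", source="zoo-db"))
rule = Node("flies(X) :- bird(X)", Step("told", source="folk-ontology"))
answer = Node("flies(tweety)", Step("modus ponens", antecedents=[fact, rule]))
print(provenance(answer))   # both sources that the derived answer relies on
```

Walking the same tree top-down, instead of collecting leaves, yields the "how was this derived" view that the IW browser presents.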

A proof markup language for Semantic Web services

Information Systems, 2006

The Semantic Web is being designed to enable automated reasoners to be used as core components in a wide variety of Web applications and services. In order for a client to accept and trust a result produced by a perhaps unfamiliar Web service, the result needs to be accompanied by a justification that is understandable and usable by the client. In this paper, we describe the Proof Markup Language (PML), an interlingua representation for justifications of results produced by Semantic Web services. We also introduce our Inference Web infrastructure, which uses PML as the foundation for providing explanations of Web services to end users. We additionally show how PML is critical for, and provides the foundation of, hybrid reasoning, where results are produced cooperatively by multiple reasoners. Our contributions in this paper focus on technological foundations for capturing formal representations of term meaning and justification descriptions, thereby facilitating trust and reuse of answers from web agents.

IWTrust: Improving User Trust in Answers from the Web

Users of question answering systems may find answers without any supporting information insufficient for determining trust levels. Once those systems begin to rely on source information that varies greatly in quality and depth, as is typical in web settings, users may trust answers even less. We address this problem by augmenting answers with optional information about the sources used in the answer generation process. In addition, we introduce a trust infrastructure, IWTrust, which enables computation of trust values for answers from the Web. Users of IWTrust have access to the sources used in answer computation, along with trust values for those sources, and so are better able to judge answer trustworthiness. Our work builds upon existing Inference Web components for representing and maintaining proofs and proof-related information justifying answers, and adds a new TrustNet component for managing trust relations and computing trust values. This paper also introduces the Inference Web answer trust computation algorithm and presents an example of its use for ranking answers and justifications by trust.
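Trust-based ranking of answers can be illustrated with a deliberately simple aggregation rule, a weakest-link policy over the sources an answer used. This is a sketch only; the actual IWTrust algorithm propagates trust through TrustNet and is considerably more elaborate:

```python
def answer_trust(source_trust: dict, sources_used: list) -> float:
    """Illustrative aggregation: an answer is only as trustworthy as its
    least trusted source (weakest link)."""
    return min(source_trust[s] for s in sources_used)

# hypothetical source trust values and answer provenance
source_trust = {"curated-db": 0.9, "web-scrape": 0.4, "user-wiki": 0.6}
answers = {
    "answer A": ["curated-db"],
    "answer B": ["curated-db", "web-scrape"],
    "answer C": ["user-wiki"],
}
ranked = sorted(answers, key=lambda a: answer_trust(source_trust, answers[a]),
                reverse=True)
print(ranked)   # ['answer A', 'answer C', 'answer B']
```

Note that answer B is dragged below answer C by its one weak source, even though it also cites the strongest source: exactly the kind of judgement a user cannot make without access to source trust values.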

Registry-Based Support for Information Integration

In order for agents and humans to leverage the growing wealth of heterogeneous information and services on the web, they increasingly need to understand the information that is delivered to them. In the simplest case, an agent or human is retrieving "look-up" information and would benefit from having access to provenance information concerning recency, source authoritativeness, etc. In more complicated situations, where information is manipulated before it is returned as an answer, agents and humans would benefit from understanding the derivations and assumptions used. When services are involved, users and agents would also benefit from understanding what actions could be, or were, executed on the user's behalf. In this paper, we introduce a strategy for registering information sources and question answering systems that supports the implementation of distributed and cooperative web services, and we describe the Inference Web infrastructure that supports explanations in distributed environments such as the web, together with the elements of its registry.

Investigations into Trust for Collaborative Information Repositories: A Wikipedia Case Study

As collaborative repositories grow in popularity and use, so do concerns about the quality and trustworthiness of their information. Some current popular repositories contain contributions from a wide variety of users, many of whom will be unknown to a potential end user. Additionally, the content may change rapidly, and information that was previously contributed by a known user may be updated by an unknown user. End users therefore face more challenges as they evaluate how much to rely on information generated and updated in this manner. A trust management layer has become an important requirement for the continued growth and acceptance of collaboratively developed and maintained information resources. In this paper, we describe our initial investigations into designing and implementing an extensible trust management layer for collaborative and/or aggregated repositories of information. We leverage our work on the Inference Web explanation infrastructure, and exploit and expand the Proof Markup Language to handle a simple notion of trust. Our work is designed to support representation, computation, and visualization of trust information, and we have grounded it in the setting of Wikipedia. We present our vision, discuss motivations, relate work to date on trust representation, and present a trust computation algorithm with experimental results. We also discuss some interesting issues encountered in our work.
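One simple way to compute a trust value for a collaboratively edited article is to weight each contributor's trust by the fraction of surviving text they contributed, with unknown contributors falling back to a neutral prior. This is an illustrative aggregation under our own assumptions, not necessarily the algorithm evaluated in the paper:

```python
def article_trust(contributions: dict, author_trust: dict) -> float:
    """Weight each author's trust by the fraction of surviving text they
    contributed; unknown authors get a neutral prior of 0.5."""
    total = sum(contributions.values())
    return sum(author_trust.get(a, 0.5) * chars / total
               for a, chars in contributions.items())

# hypothetical article: 600 surviving chars from a known editor, 400 from an anon
contributions = {"alice": 600, "anon": 400}
author_trust = {"alice": 0.9}
print(round(article_trust(contributions, author_trust), 2))   # 0.74
```

The same scheme degrades gracefully as edits accumulate: text replaced by an unknown user pulls the article's trust back toward the neutral prior.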

Tracking RDF Graph Provenance using RDF Molecules

The Semantic Web can be viewed as one large "universal" RDF graph distributed across many Web pages. This is impractical for many reasons, so we usually work with a decomposition into RDF documents, each of which corresponds to an individual Web page. While this is natural and appropriate for most tasks, it is still too coarse for some. For example, many RDF documents may redundantly contain the same data, and some documents comprise large amounts of weakly related or unrelated data. Decomposing a document into its RDF triples is usually too fine a decomposition: information may be lost if the graph contains blank nodes. We define an intermediate decomposition of an RDF graph G into a set of RDF "molecules", each of which is a connected sub-graph of the original. The decomposition is "lossless" in that the molecules can be recombined to yield G even if their blank node IDs are "standardized apart". RDF molecules provide a useful granularity for tracking the provenance of, or evidence for, information found in an RDF graph. Doing so at the document level (e.g., finding other documents with identical graphs) may find too few matches, while working at the triple level will simply fail for any triples containing blank nodes. RDF molecules are the finest granularity at which provenance can be tracked without loss of information. We define the RDF molecule concept in more detail, describe an algorithm to decompose an RDF graph into its molecules, and show how these can be used to find evidence supporting the original graph. The decomposition algorithm and the provenance application have both been prototyped in a simple Web-based demonstration.
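A basic form of this decomposition can be sketched as connected components over blank nodes: triples that share a blank node end up in the same molecule, while fully ground triples form singleton molecules. The sketch below assumes a toy triple encoding (plain tuples, blank nodes as strings prefixed "_:") and may differ in detail from the paper's algorithm:

```python
from itertools import count

def decompose(triples):
    """Split an RDF graph into molecules: ground triples stand alone;
    triples are merged when they share a blank node (id prefix '_:')."""
    def blanks(t):
        return {x for x in t if isinstance(x, str) and x.startswith("_:")}

    # union-find over blank node ids
    parent = {}
    def find(x):
        while parent.setdefault(x, x) != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x
    def union(a, b):
        parent[find(a)] = find(b)

    fresh = count()
    keys = []
    for t in triples:
        bs = sorted(blanks(t))
        if not bs:
            keys.append(("ground", next(fresh)))   # singleton molecule
        else:
            for b in bs[1:]:
                union(bs[0], b)                    # blanks in one triple connect
            keys.append(("blank", bs[0]))

    molecules = {}
    for t, k in zip(triples, keys):
        root = k if k[0] == "ground" else ("blank", find(k[1]))
        molecules.setdefault(root, []).append(t)
    return list(molecules.values())

g = [
    ("ex:paper", "ex:title", "RDF Molecules"),     # ground -> its own molecule
    ("ex:paper", "ex:author", "_:a"),              # shares _:a ...
    ("_:a", "ex:name", "Li Ding"),                 # ... so same molecule
    ("_:b", "ex:mbox", "mailto:x@example.org"),    # separate blank component
]
print(len(decompose(g)))   # 3
```

Recombining the molecules reproduces the original graph even after blank node IDs are renamed apart, because the renaming never crosses a molecule boundary.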

Explaining Conclusions from Diverse Knowledge Sources

The ubiquitous non-semantic web includes a vast array of unstructured information such as HTML documents. The semantic web provides more structured knowledge such as hand-built ontologies and semantically aware databases. To leverage the full power of both the semantic and non-semantic portions of the web, software systems need to be able to reason over both kinds of information. Systems that use both structured and unstructured information face a significant challenge when trying to convince a user to believe their results: the sources and the kinds of reasoning that are applied to the sources are radically different in their nature and their reliability. Our work aims at explaining conclusions derived from a combination of structured and unstructured sources. We present our solution that provides an infrastructure capable of encoding justifications for conclusions in a single format. This integration provides an end-to-end description of the knowledge derivation process including access to text or HTML documents, descriptions of the analytic processes used for extraction, as well as descriptions of the ontologies and many kinds of information manipulation processes, including standard deduction. We produce unified traces of extraction and deduction processes in the Proof Markup Language (PML), an OWL-based formalism for encoding provenance for inferred information. We provide a browser for exploring PML and thus enabling a user to understand how some conclusion was reached.

Infrastructure for Web Explanations

The Semantic Web lacks support for explaining knowledge provenance. When web applications return answers, many users do not know what information sources were used, when they were updated, how reliable the source was, or what information was looked up versus derived. The Semantic Web also lacks support for explaining reasoning paths used to derive answers. The Inference Web (IW) aims to take opaque query answers and make the answers more transparent by providing explanations. The explanations include information concerning where answers came from and how they were derived (or retrieved). In this paper we describe an infrastructure for IW explanations. The infrastructure includes: an extensible web-based registry containing details on information sources, reasoners, languages, and rewrite rules; a portable proof specification; and a proof and explanation browser. Source information in the IW registry is used to convey knowledge provenance. Representation and reasoning language axioms and rewrite rules in the IW registry are used to support proofs, proof combination, and semantic web agent interoperability. The IW browser is used to support navigation and presentations of proofs and their explanations. The Inference Web is in use by two Semantic Web agents using an embedded reasoning engine fully registered in the IW. Additional reasoning engine registration is underway in order to help provide input for evaluation of the adequacy, breadth, and scalability of our approach.

Saulo Ramos

According to the Multidisciplinary Reading Project of the Law Course, the student must read the book: RAMOS, Saulo. Código da Vida. As part of the N2 grade, write a critical review of the cited book, following these criteria: 1 - This activity aims to exercise the general education area, developing skills and competencies in reading, interpretation, and text production, thereby broadening and contributing to the student's academic education.

Research paper thumbnail of Gradual trust and distrust in recommender systems

Fuzzy Sets and Systems, 2009

Trust networks among users of a recommender system (RS) prove beneficial to the quality and amoun... more Trust networks among users of a recommender system (RS) prove beneficial to the quality and amount of the recommendations. Since trust is often a gradual phenomenon, fuzzy relations are the pre-eminent tools for modeling such networks. However, as current trust-enhanced RSs do not work with the notion of distrust, they cannot differentiate unknown users from malicious users, nor represent inconsistency. These are serious drawbacks in large networks where many users are unknown to each other and might provide contradictory information. In this paper, we advocate the use of a trust model in which trust scores are (trust,distrust)-couples, drawn from a bilattice that preserves valuable trust provenance information including gradual trust, distrust, ignorance, and inconsistency. We pay particular attention to deriving trust information through a trusted third party, which becomes especially challenging when also distrust is involved.

Research paper thumbnail of A Many Valued Representation and Propagation of Trust and Distrust

As the amount of information on the web grows, users may find increasing challenges in trusting a... more As the amount of information on the web grows, users may find increasing challenges in trusting and sometimes distrusting sources. One possible aid is to maintain a network of trust between sources. In this paper, we propose to model such a network as an intuitionistic fuzzy relation. This allows to elegantly handle together the problem of ignorance, i.e. not knowing whether to trust or not, and vagueness, i.e. trust as a matter of degree. We pay special attention to deriving trust information through a trusted third party, which becomes especially challenging when distrust is involved.

Research paper thumbnail of Notational Support for the Design of Augmented Reality Systems

There is growing interest in augmented reality (AR) as technologies are developed that enable eve... more There is growing interest in augmented reality (AR) as technologies are developed that enable ever smoother integration of computer capabilities into the physical objects that populate the everyday lives of users. However, despite this growing importance of AR technologies, there is little tool support for the design of AR systems. In this paper, we present two notations, ASUR and UMLi, that can be used to capture design-significant features of AR systems. ASUR is a notation for designing user interactions in AR environments. UMLi is a notation for designing the user interfaces to interactive systems. We use each notation to specify the design of an augmented museum gallery. We then compare the two notations in terms of the types of support they provide and consider how they might be used together.

Research paper thumbnail of Teallach: a model-based user interface development environment for object databases

Interacting with Computers, 2001

Model-based user interface development environments show promise for improving the productivity o... more Model-based user interface development environments show promise for improving the productivity of user interface developers, and possibly for improving the quality of developed interfaces. While model-based techniques have previously been applied to the area of database interfaces, they have not been speci®cally targeted at the important area of object database applications. Such applications make use of models that are semantically richer than their relational counterparts in terms of both data structures and application functionality. In general, model-based techniques have not addressed how the information referenced in such applications is manifested within the described models, and is utilised within the generated interface itself. This lack of experience with such systems has led to many model-based projects providing minimal support for certain features that are essential to such data intensive applications, and has prevented object database interface developers in particular from bene®ting from model-based techniques. This paper presents the Teallach model-based user interface development environment for object databases, describing the models it supports, the relationships between these models, the tool used to construct interfaces using the models and the generation of Java programs from the declarative models. Distinctive features of Teallach include comprehensive facilities for linking models, a¯exible development method, an open architecture, and the generation of running applications based on the models constructed by designers. q

Research paper thumbnail of UMLi: The Unified Modeling Language for Interactive Applications

User interfaces (UIs) are essential components of most software systems, and significantly affect... more User interfaces (UIs) are essential components of most software systems, and significantly affect the effectiveness of installed applications. In addition, UIs often represent a significant proportion of the code delivered by a development activity. However, despite this, there are no modelling languages and tools that support contract elaboration between UI developers and application developers. The Unified Modeling Language (UML) has been widely accepted by application developers, but not so much by UI designers. For this reason, this paper introduces the notation of the Unified Modelling Language for Interactive Applications (UMLi), that extends UML, to provide greater support for UI design. UI elements elicited in use cases and their scenarios can be used during the design of activities and UI presentations. A diagram notation for modelling user interface presentations is introduced. Activity diagram notation is extended to describe collaboration between interaction and domain objects. Further, a case study using UMLi notation and method is presented.

Research paper thumbnail of User Interface Modelling with UML

The Unified Modeling Language (UML) is a natural candidate for user interface (UI) modelling sinc... more The Unified Modeling Language (UML) is a natural candidate for user interface (UI) modelling since it is the standard notation for object oriented modelling of applications. However, it is by no means clear how to model UIs using UML. This paper presents a user interface modelling case study using UML. This case study identifies some aspects of UIs that cannot be modelled using UML notation, and a set of UML constructors that may be used to model UIs. The modelling problems indicate some weaknesses of UML for modelling UIs, while the constructors exploited indicate some strengths. The identification of such strengths and weaknesses can be used in the formulation of a strategy for extending UML to provide greater support for user interface design.

Research paper thumbnail of Knowledge Provenance Infrastructure

IEEE Data(base) Engineering Bulletin, 2003

Abstract The web lacks support for explaining information provenance. When web applications retur... more Abstract The web lacks support for explaining information provenance. When web applications return answers, many users do not know what information sources were used, when they were updated, how reliable the source was, or what information was looked up versus derived. Support for information provenance is expected to be a harder problem in the Semantic Web where more answers result from some manipulation of information (instead of simple retrieval of information). Manipulation includes, among other things, ...

Research paper thumbnail of Explaining answers from the Semantic Web: the Inference Web approach

Journal of Web Semantics, 2004

The Semantic Web lacks support for explaining answers from web applications. When applications re... more The Semantic Web lacks support for explaining answers from web applications. When applications return answers, many users do not know what information sources were used, when they were updated, how reliable the source was, or what information was looked up versus derived. Many users also do not know how implicit answers were derived. The Inference Web (IW) aims to take opaque query answers and make the answers more transparent by providing infrastructure for presenting and managing explanations. The explanations include information concerning where answers came from (knowledge provenance) and how they were derived (or retrieved). In this article we describe an infrastructure for IW explanations. The infrastructure includes: IWBase -an extensible web-based registry containing details about information sources, reasoners, languages, and rewrite rules; PML -the Proof Markup Language specification and API used for encoding portable proofs; IW browser -a tool supporting navigation and presentations of proofs and their explanations; and a new explanation dialogue component. Source information in the IWBase is used to convey knowledge provenance. Representation and reasoning language axioms and rewrite rules in the IWBase are used to support proofs, proof combination, and Semantic Web agent interoperability. The Inference Web is in use by four Semantic Web agents, three of them using embedded reasoning engines fully registered in the IW. Inference Web also provides explanation infrastructure for a number of DARPA and ARDA projects.

Research paper thumbnail of A proof markup language for Semantic Web services

Information Systems, 2006

The Semantic Web is being designed to enable automated reasoners to be used as core components in... more The Semantic Web is being designed to enable automated reasoners to be used as core components in a wide variety of Web applications and services. In order for a client to accept and trust a result produced by perhaps an unfamiliar Web service, the result needs to be accompanied by a justification that is understandable and usable by the client. in this paper, we describe the Proof Markup Language (PML), an interlingua representation for justifications of results produced by Semantic Web services. We also introduce our Inference Web infrastructure that uses PML as the foundation for providing explanations of Web services to end users. We additionally show how PML is critical for and provides the foundation for hybrid reasoning where results are produced cooperatively by multiple reasoners. Our contributions in this paper focus on technological foundations for capturing formal representations of term meaning and justification descriptions thereby facilitating trust and reuse of answers from web agents.

Research paper thumbnail of IWTrust: Improving User Trust in Answers from the Web

Question answering systems users may find answers without any supporting information insufficient... more Question answering systems users may find answers without any supporting information insufficient for determining trust levels. Once those question answering systems begin to rely on source information that varies greatly in quality and depth, such as is typical in web settings, users may trust answers even less. We address this problem by augmenting answers with optional information about the sources that were used in the answer generation process. In addition, we introduce a trust infrastructure, IWTrust, which enables computations of trust values for answers from the Web. Users of IWTrust have access to sources used in answer computation along with trust values for those source, thus they are better able to judge answer trustworthiness. Our work builds upon existing Inference Web components for representing and maintaining proofs and proof related information justifying answers. It includes a new TrustNet component for managing trust relations and for computing trust values. This paper also introduces the Inference Web answer trust computation algorithm and presents an example of its use for ranking answers and justifications by trust.

Research paper thumbnail of Registry-Based Support for Information Integration

In order for agents and humans to leverage the growing wealth of heterogeneous information and services on the web, they increasingly need to understand the information that is delivered to them. In the simplest case, an agent or human retrieving "look-up" information would benefit from having access to provenance information concerning recency, source authoritativeness, etc. In more complicated situations, where information is manipulated before it is returned as an answer, agents and humans would benefit from understanding the derivations and assumptions used. When services are involved, users and agents would also benefit from understanding what actions could be or were executed on the user's behalf. In this paper, we introduce a strategy for registering information sources and question answering systems, providing support for implementing distributed and cooperative web services. We also describe the Inference Web infrastructure that supports explanations in distributed environments such as the web, and the elements of its registry.

Research paper thumbnail of Investigations into Trust for Collaborative Information Repositories: A Wikipedia Case Study

As collaborative repositories grow in popularity and use, issues concerning the quality and trustworthiness of their information grow as well. Some current popular repositories contain contributions from a wide variety of users, many of whom will be unknown to a potential end user. Additionally, the content may change rapidly, and information that was previously contributed by a known user may be updated by an unknown user. End users thus face more challenges as they evaluate how much they may want to rely on information that was generated and updated in this manner. A trust management layer has become an important requirement for the continued growth and acceptance of collaboratively developed and maintained information resources. In this paper, we describe our initial investigations into designing and implementing an extensible trust management layer for collaborative and/or aggregated repositories of information. We leverage our work on the Inference Web explanation infrastructure and exploit and expand the Proof Markup Language to handle a simple notion of trust. Our work is designed to support representation, computation, and visualization of trust information. We have grounded our work in the setting of Wikipedia. In this paper, we present our vision and motivations, relate work to date on trust representation, and present a trust computation algorithm with experimental results. We also discuss some interesting issues encountered in our work.

Research paper thumbnail of Tracking RDF Graph Provenance using RDF Molecules

The Semantic Web can be viewed as one large "universal" RDF graph distributed across many Web pages. This is impractical for many reasons, so we usually work with a decomposition into RDF documents, each of which corresponds to an individual Web page. While this is natural and appropriate for most tasks, it is still too coarse for some. For example, many RDF documents may redundantly contain the same data, and some documents comprise large amounts of weakly-related or unrelated data. Decomposing a document into its RDF triples is usually too fine a decomposition: information may be lost if the graph contains blank nodes. We define an intermediate decomposition of an RDF graph G into a set of RDF "molecules", each of which is a connected sub-graph of the original. The decomposition is "lossless" in that the molecules can be recombined to yield G even if their blank node IDs are "standardized apart". RDF molecules provide a useful granularity for tracking the provenance of or evidence for information found in an RDF graph. Doing so at the document level (e.g., finding other documents with identical graphs) may find too few matches. Working at the triple level will simply fail for any triples containing blank nodes. RDF molecules are the finest granularity at which we can do this without loss of information. We define the RDF molecule concept in more detail, describe an algorithm to decompose an RDF graph into its molecules, and show how these can be used to find evidence to support the original graph. The decomposition algorithm and the provenance application have both been prototyped in a simple Web-based demonstration.
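The decomposition the abstract describes can be sketched roughly as follows, assuming blank nodes are identified by labels beginning with "_:" (a common serialization convention). This is an illustrative reconstruction under that assumption, not the paper's published algorithm: triples that share blank nodes are grouped together via union-find, and every ground triple becomes its own molecule.

```python
from collections import defaultdict

def is_blank(node):
    # Assumed convention: blank node IDs are strings starting with "_:".
    return isinstance(node, str) and node.startswith("_:")

def decompose_into_molecules(triples):
    """Split an RDF graph (a list of (s, p, o) triples) into 'molecules':
    triples connected through shared blank nodes stay together, while each
    ground triple forms a singleton molecule, so recombining the molecules
    reproduces the graph even if blank node IDs are renamed apart."""
    parent = {}  # union-find over blank node labels

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    # Link blank nodes that co-occur in a triple.
    for s, p, o in triples:
        blanks = [n for n in (s, p, o) if is_blank(n)]
        for other in blanks[1:]:
            union(blanks[0], other)

    molecules = defaultdict(list)
    ground = []
    for t in triples:
        blanks = [n for n in t if is_blank(n)]
        if blanks:
            molecules[find(blanks[0])].append(t)
        else:
            ground.append([t])  # ground triple: its own molecule
    return list(molecules.values()) + ground
```

On a small graph such as {(ex:alice, ex:knows, _:b1), (_:b1, ex:name, "Bob"), (ex:alice, ex:age, "42")}, the two blank-node triples form one molecule and the ground triple its own, and the union of the molecules is exactly the original graph — the "lossless" property the abstract claims.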

Research paper thumbnail of Explaining Conclusions from Diverse Knowledge Sources

The ubiquitous non-semantic web includes a vast array of unstructured information such as HTML documents. The Semantic Web provides more structured knowledge such as hand-built ontologies and semantically aware databases. To leverage the full power of both the semantic and non-semantic portions of the web, software systems need to be able to reason over both kinds of information. Systems that use both structured and unstructured information face a significant challenge when trying to convince a user to believe their results: the sources and the kinds of reasoning that are applied to the sources are radically different in their nature and their reliability. Our work aims at explaining conclusions derived from a combination of structured and unstructured sources. We present our solution, which provides an infrastructure capable of encoding justifications for conclusions in a single format. This integration provides an end-to-end description of the knowledge derivation process, including access to text or HTML documents, descriptions of the analytic processes used for extraction, as well as descriptions of the ontologies and many kinds of information manipulation processes, including standard deduction. We produce unified traces of extraction and deduction processes in the Proof Markup Language (PML), an OWL-based formalism for encoding provenance for inferred information. We provide a browser for exploring PML, enabling a user to understand how a conclusion was reached.

Research paper thumbnail of Infrastructure for Web Explanations

The Semantic Web lacks support for explaining knowledge provenance. When web applications return answers, many users do not know what information sources were used, when they were updated, how reliable the source was, or what information was looked up versus derived. The Semantic Web also lacks support for explaining reasoning paths used to derive answers. The Inference Web (IW) aims to take opaque query answers and make the answers more transparent by providing explanations. The explanations include information concerning where answers came from and how they were derived (or retrieved). In this paper we describe an infrastructure for IW explanations. The infrastructure includes: an extensible web-based registry containing details on information sources, reasoners, languages, and rewrite rules; a portable proof specification; and a proof and explanation browser. Source information in the IW registry is used to convey knowledge provenance. Representation and reasoning language axioms and rewrite rules in the IW registry are used to support proofs, proof combination, and semantic web agent interoperability. The IW browser is used to support navigation and presentations of proofs and their explanations. The Inference Web is in use by two Semantic Web agents using an embedded reasoning engine fully registered in the IW. Additional reasoning engine registration is underway in order to help provide input for evaluation of the adequacy, breadth, and scalability of our approach.

Research paper thumbnail of Saulo Ramos

According to the Projeto Leitura Multidisciplinar of the Curso de Direito (Law Program), students must read the book: RAMOS, Saulo. Código da Vida. As part of the N2 grade, write a critical review of the cited book, observing the following criteria: 1 - This activity aims to address the general education area by exploring skills and competencies in reading, interpretation, and text production, broadening and contributing to the student's academic formation.