Towards Utilizing Open Data for Interactive Knowledge Transfer
Related papers
Proceedings of 3rd International Conference on Data Management Technologies and Applications, 2014
Data is everywhere, and non-expert users must be able to exploit it in order to extract knowledge, gain insights and make well-informed decisions. The discovered knowledge is of even greater value if it remains available for later consumption and reuse. In this paper, we present the first version of the Knowledge Spring Process, an infrastructure that allows non-expert users to (i) apply user-friendly data mining techniques on open data sources, and (ii) share the results as Linked Open Data (LOD). The main contribution of this paper is the concept of reusing the knowledge gained from data mining processes after it has been semantically annotated as LOD, thereby obtaining Linked Open Knowledge. Our Knowledge Spring Process is based on a model-driven viewpoint in order to deal more easily with the wide diversity of open data formats.
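To make the idea of publishing mining results as Linked Open Data concrete, here is a minimal sketch using rdflib in Python. The namespace, class and property names are assumptions chosen for illustration; this is not the paper's actual Knowledge Spring implementation.

```python
# A minimal sketch (not the authors' implementation) of annotating a mined
# result as Linked Open Data with rdflib. Namespace and property names are illustrative.
from rdflib import Graph, Literal, Namespace, RDF, URIRef
from rdflib.namespace import XSD

EX = Namespace("http://example.org/knowledge/")   # hypothetical vocabulary
g = Graph()
g.bind("ex", EX)

# Suppose a data mining step found a pattern in an open data source; we describe
# that finding as RDF so it can be shared and reused later as Linked Open Knowledge.
finding = EX["finding/001"]
g.add((finding, RDF.type, EX.MiningResult))
g.add((finding, EX.derivedFrom, URIRef("http://example.org/opendata/dataset42")))
g.add((finding, EX.confidence, Literal(0.87, datatype=XSD.double)))
g.add((finding, EX.description, Literal("Example pattern discovered by the mining step")))

print(g.serialize(format="turtle"))
```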
Ontodia.org - a simple cognitive service to fill the gap in linked open data management tools
It has already been stated many times [1], [2], [3] that in the world of the Semantic Web there is still a great need for practical tools that are accessible to common people, with no special knowledge required. In fact, this lack of appropriate tools is frequently mentioned in publications and at open discussions [4]. As for the management, visualization and transformation of linked open data (LOD), the selection of tools is even more scarce. Practically all the available options possess one or several of these characteristics:
- Bulky and complex (difficult to deploy, learn and maintain).
- Developer-oriented (only a developer can launch and use).
- Costly (most products' prices start from $1500).
- Neither intuitive nor visual.
Cognitive features and visual-driven user interaction are almost unheard of among LOD users, and in that sense they are greatly deprived of advanced software user interfaces (UI) when compared to users of traditional databases.
Integrating Know-How into the Linked Data Cloud
This paper presents the first framework for integrating procedural knowledge, or “know-how”, into the Linked Data Cloud. Know-how available on the Web, such as step-by-step instructions, is largely unstructured and isolated from other sources of online knowledge. To overcome these limitations, we propose extending to procedural knowledge the benefits that Linked Data has already brought to representing, retrieving and reusing declarative knowledge. We describe a framework for representing generic know-how as Linked Data and for automatically acquiring this representation from existing resources on the Web. This system also allows the automatic generation of links between different know-how resources, and between those resources and other online knowledge bases, such as DBpedia. We discuss the results of applying this framework to a real-world scenario and we show how it outperforms existing manual community-driven integration efforts.
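As a rough illustration of representing know-how as Linked Data and linking it to DBpedia, the sketch below uses rdflib with an invented know-how namespace; the class and property names are assumptions for this example, not the framework's actual vocabulary.

```python
# A rough sketch (assumed vocabulary, not the paper's schema) of expressing a
# step-by-step instruction set as Linked Data and linking it to DBpedia.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

KH = Namespace("http://example.org/know-how/")    # hypothetical know-how vocabulary
DBR = Namespace("http://dbpedia.org/resource/")

g = Graph()
g.bind("kh", KH)

task = KH["task/make-tea"]
step1 = KH["task/make-tea/step/1"]

g.add((task, RDF.type, KH.Procedure))
g.add((task, RDFS.label, Literal("Make a cup of tea")))
g.add((task, KH.hasStep, step1))
g.add((step1, RDFS.label, Literal("Boil water")))
# Link the step to an entry in a general-purpose knowledge base (DBpedia).
g.add((step1, KH.requires, DBR["Water"]))

print(g.serialize(format="turtle"))
```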
2009
A major obstacle impeding progress on the “web of data” is content creation—a difficult, tedious, and time-consuming task. How do we make human-scalable, user-friendly tools to enable the web of data? Content integrity is also a major concern. How do we engender confidence in results returned from the web of data? Although seemingly unrelated, we show in this paper that it is exactly their relationship that is the key to solving both problems. As we show in this paper, we can semi-automatically derive both data and metadata from data-rich web pages to create a web of data that we then superimpose over these data-rich web pages. We link the web of data to the current web of pages, resulting in a higher-order “web of knowledge.” This web of knowledge provides provenance and thus engenders the confidence necessary to raise the level of the web from “data” to “knowledge.” We focus mainly on two prototype tools we have implemented: (1) TISP—a tool to automatically generate ontologies for ...
Simplifying knowledge creation and access for end-users on the SW
2008
In this position paper, we argue that improved mechanisms for knowledge acquisition and access on the semantic web (SW) will be necessary before it is adopted widely by end-users. In particular, we propose an investigation into improved languages for knowledge exchange, better UI mechanisms for interaction, and potential help from user modeling to enable accurate, efficient SW knowledge modeling for everyone.
APOSDLE: learn@work with semantic web technology
I- …, 2007
The EU project APOSDLE focuses on work-integrated learning. Among the several challenges of the project, a crucial role is played by the system's ability to start from the context of the immediate work of a user, establish her missing competencies and learning needs, and suggest appropriate learning stimuli on the fly. These learning stimuli are created from a variety of resources (documents, videos, expert profiles, and so on) already stored in the workplace and may take the form of learning material or suggestions to contact experts and/or colleagues. Addressing this challenge requires building a system which is able to find, choose, share, and combine a variety of knowledge, evolving content and resources in an automatic and effective manner. The implementation of this capability requires technology which goes beyond traditional query-answering and keyword-based search engines, and Semantic Web technology was chosen by the consortium as the most appropriate technology to make information search and data integration more efficient. The aim of this paper is to give an overview of the broad spectrum of Semantic Web technologies that are needed for a complex application like APOSDLE, and the challenges for the Semantic Web community that have appeared along the way.
Interlinking Open Data on the Web
A fundamental prerequisite of the Semantic Web is the existence of large amounts of meaningfully interlinked RDF data on the Web. The W3C SWEO community project Linking Open Data has made various open datasets available on the Web as RDF, and developed automated mechanisms to interlink them with RDF statements. Collectively, the datasets currently consist of over one billion triples. We believe that large scale interlinking will demonstrate the value of the Semantic Web compared to more centralized approaches such as Google Base. This paper outlines the work to date and describes the accompanying demonstration. A functioning Semantic Web is predicated on the availability of large amounts of data as RDF; not in isolated islands but as a Web of interlinked datasets. To date this prerequisite has not been widely met, leading to criticism of the broader endeavour and hindering the progress of developers wishing to build Semantic Web applications. Thanks to the Open Data movement, a va...
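The interlinking described above typically takes the form of RDF statements such as owl:sameAs links between co-referent resources in different datasets. The following is an illustrative sketch with placeholder URIs, not links drawn from the actual Linking Open Data datasets.

```python
# Illustrative only: publishing an owl:sameAs link that connects co-referent
# resources in two open datasets. The URIs are placeholders.
from rdflib import Graph, URIRef
from rdflib.namespace import OWL

g = Graph()
g.bind("owl", OWL)

# Asserting that a record in one dataset and its counterpart in another denote
# the same real-world entity lets clients merge what each dataset says about it.
g.add((
    URIRef("http://example.org/datasetA/person/42"),
    OWL.sameAs,
    URIRef("http://example.org/datasetB/people/jane-doe"),
))

print(g.serialize(format="nt"))
```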
ACM Transactions on Internet Technology
Data integration is the dominant use case for RDF Knowledge Graphs. However, Web resources come in formats with weak semantics (for example, CSV and JSON), or formats specific to a given application (for example, BibTex, HTML, and Markdown). To solve this problem, Knowledge Graph Construction (KGC) is gaining momentum due to its focus on supporting users in transforming data into RDF. However, using existing KGC frameworks results in complex data processing pipelines, which mix structural and semantic mappings, and whose development and maintenance constitute a significant bottleneck for KG engineers. Such frameworks force users to rely on different tools, sometimes based on heterogeneous languages, for inspecting sources, designing mappings, and generating triples, thus making the process unnecessarily complicated. We argue that it is possible and desirable to equip KG engineers with the ability to interact with Web data formats by relying on their expertise in RDF and the well-estab...
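The structural-to-semantic lifting that KGC pipelines perform can be pictured with a minimal sketch: below, CSV rows are turned into RDF triples with rdflib, using invented column names and a hypothetical target vocabulary rather than any specific KGC framework's mapping language.

```python
# A minimal sketch of lifting CSV rows into RDF triples; column names and the
# target vocabulary are assumptions for illustration.
import csv
import io

from rdflib import Graph, Literal, Namespace, RDF, URIRef

EX = Namespace("http://example.org/vocab/")
data = io.StringIO("id,name,city\n1,Alice,Berlin\n2,Bob,Madrid\n")

g = Graph()
g.bind("ex", EX)

for row in csv.DictReader(data):
    person = URIRef(f"http://example.org/person/{row['id']}")
    g.add((person, RDF.type, EX.Person))
    g.add((person, EX.name, Literal(row["name"])))
    g.add((person, EX.city, Literal(row["city"])))

print(g.serialize(format="turtle"))
```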
RKBExplorer.com: A Knowledge Driven Infrastructure for Linked Data Providers
Lecture Notes in Computer Science, 2008
RKB Explorer is a Semantic Web application that is able to present unified views of a significant number of heterogeneous data sources. We have developed an underlying information infrastructure which is mediated by ontologies and consists of many independent triplestores, each publicly available through both SPARQL endpoints and resolvable URIs. To realise this synergy of disparate information sources, we have deployed tools to identify co-referent URIs, and devised an architecture to allow the information to be represented and used. This paper provides a brief overview of the system including the underlying infrastructure, and a number of associated tools for both knowledge acquisition and publishing.
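Since each triplestore is exposed through a SPARQL endpoint, a client can consume it with a few lines of code. The sketch below uses SPARQLWrapper against a placeholder endpoint URL and a generic query, not the project's actual endpoints or ontologies.

```python
# A generic sketch of querying a publicly available SPARQL endpoint.
# The endpoint URL and query are placeholders.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://example.org/sparql")   # hypothetical endpoint
sparql.setQuery("""
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?s ?label WHERE {
        ?s rdfs:label ?label .
    } LIMIT 10
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["s"]["value"], binding["label"]["value"])
```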