Towards ontology evaluation across the life cycle

Applied Ontology, 2013

Problem: Currently there is no agreed-upon methodology for the development of ontologies, and there is no consensus on how ontologies should be evaluated. Consequently, evaluation techniques and tools are not widely used in ontology development. This can lead to ontologies of poor quality and is an obstacle to the successful deployment of ontologies as a technology. Approach: The goal of the Ontology Summit 2013 was to create guidance for ontology developers and users on how to evaluate ontologies. Over a period of four months, a variety of approaches were discussed by participants, who represented a broad spectrum of ontology, software, and system developers and users. We explored how established best practices in systems engineering and software engineering can be applied to ontology development. Results: This document focuses on the evaluation of five aspects of ontology quality: intelligibility, fidelity, craftsmanship, fitness, and deployability. A model of the ontology life cycle is presented, and evaluation criteria are discussed in the context of the phases of that life cycle. We discuss the availability of tools, and the document ends with observations and recommendations.
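Parts of the craftsmanship and intelligibility aspects named above lend themselves to mechanical checking during the life cycle. The following is a minimal sketch, assuming Python with rdflib; the file name ontology.owl is a placeholder, and skos:definition is just one common choice of definition property. It flags OWL classes that lack the labels and definitions human intelligibility depends on; it is an illustration of the idea, not tooling prescribed by the Summit.

```python
# Minimal sketch: automated checks for one slice of "craftsmanship" and
# "intelligibility" -- every OWL class should carry a human-readable label
# and a definition. Assumes rdflib; "ontology.owl" is a placeholder name,
# and skos:definition is one common (but not universal) definition property.
from rdflib import Graph, RDF, RDFS, OWL
from rdflib.namespace import SKOS

g = Graph()
g.parse("ontology.owl")  # rdflib infers the RDF serialization from the file

for cls in g.subjects(RDF.type, OWL.Class):
    if g.value(cls, RDFS.label) is None:
        print(f"missing rdfs:label:      {cls}")
    if g.value(cls, SKOS.definition) is None:
        print(f"missing skos:definition: {cls}")
```

Checks of this kind cover only the mechanical side of quality; fidelity and fitness still require human review and task-based evaluation.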

Meeting Report: "NIST Workshop on Ontology Evaluation"

2012

The National Institute of Standards and Technology sponsored a workshop in October 2007 on the subject of ontology evaluation. An international group of invited experts met for two days to discuss problems in measuring ontology quality. The workshop highlighted several divisions among ontology developers regarding approaches to ontology evaluation. These divisions were generally reflective of the opinions of the participants. However, the workshop documented a paucity of empirical evidence in support of any particular position. Given the importance of ontologies to every knowledge-intensive human activity, there is an urgent need for research to develop an empirically derived knowledge base of best practices in ontology engineering and methods for assuring ontology quality over time. This is a report of the workshop discussion and brainstorming by the participants about what such a research program might look like.

The Evaluation of Ontologies

Semantic Web, 2007

Recent years have seen rapid progress in the development of ontologies as semantic models intended to capture and represent aspects of the real world. There is, however, great variation in the quality of ontologies. If ontologies are to become progressively better in the future, more rigorously developed, and more appropriately compared, then a systematic discipline of ontology evaluation must be created to ensure quality of content and methodology. Systematic methods for ontology evaluation will take into account representation of individual ontologies, performance (in terms of accuracy, domain coverage and the efficiency and quality of automated reasoning using the ontologies) on tasks for which the ontology is designed and used, degree of alignment with other ontologies and their compatibility with automated reasoning. A sound and systematic approach to ontology evaluation is required to transform ontology engineering into a true scientific and engineering discipline. This chapter discusses issues and problems in ontology evaluation, describes some current strategies, and suggests some approaches that might be useful in the future.
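Of the dimensions listed above, domain coverage is one that can be roughly approximated in code. The sketch below is a hypothetical illustration, assuming Python with rdflib; the ontology path and the glossary terms are invented inputs, and real evaluations use more careful term matching than exact label comparison.

```python
# Rough domain-coverage estimate: the fraction of glossary terms that
# appear (case-insensitively) among the ontology's rdfs:label values.
# A crude stand-in for the "domain coverage" dimension discussed above.
from rdflib import Graph, RDFS

def label_coverage(ontology_path: str, domain_terms: set) -> float:
    g = Graph()
    g.parse(ontology_path)
    labels = {str(label).lower() for label in g.objects(None, RDFS.label)}
    if not domain_terms:
        return 0.0
    hits = sum(1 for term in domain_terms if term.lower() in labels)
    return hits / len(domain_terms)

# Hypothetical usage with a tiny invented glossary:
print(label_coverage("ontology.owl", {"enzyme", "substrate", "catalysis"}))
```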

A Framework for Ontology Evaluation

The rapid growth in the number of ontologies has not been matched by wide adoption of ontologies in practice. Ontology evaluation can promote ontology use by facilitating the selection of a good ontology. Although a host of ontology evaluation methodologies is available, many of them are fragmentary and strongly tied to particular ontology development methodologies. Based on a review of extant ontology evaluation methods, we propose a framework for ontology evaluation. The framework provides a holistic view of ontology evaluation, suggesting both fundamental ontology dimensions and concrete criteria.
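To make "fundamental ontology dimensions and concrete criteria" concrete, one hypothetical way to operationalize such a framework is as weighted dimensions that aggregate scored criteria. The sketch below (Python; all dimension names, weights, and scores are invented for illustration) is not the framework proposed in the paper, only a shape such a framework could take in code.

```python
# Hypothetical skeleton of a criteria-based evaluation framework:
# dimensions group concrete criteria (each scored in [0, 1]) and carry
# weights; the overall score is the weighted mean of dimension scores.
from dataclasses import dataclass, field

@dataclass
class Dimension:
    name: str
    weight: float
    criteria: dict = field(default_factory=dict)  # criterion name -> score in [0, 1]

    def score(self) -> float:
        return sum(self.criteria.values()) / len(self.criteria) if self.criteria else 0.0

def overall_score(dimensions: list) -> float:
    total = sum(d.weight for d in dimensions)
    return sum(d.weight * d.score() for d in dimensions) / total

# Example values; names, weights, and scores are invented.
dims = [
    Dimension("syntactic", 0.2, {"parses cleanly": 1.0, "valid OWL DL": 0.8}),
    Dimension("semantic", 0.5, {"logically consistent": 1.0, "no unsatisfiable classes": 0.9}),
    Dimension("pragmatic", 0.3, {"domain coverage": 0.7}),
]
print(f"overall: {overall_score(dims):.2f}")
```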

A snapshot of ontology evaluation criteria and strategies

Ontologies are key to information retrieval, semantic integration of datasets, and semantic similarity analyses. Evaluating ontologies (especially defining what constitutes a "good" or "better" ontology) is therefore of central importance for the Semantic Web community. Various criteria have been introduced in the literature to evaluate ontologies, and this article classifies them according to their relevance to the design or the implementation phase of ontology development. In addition, the article compiles strategies for ontology evaluation based on ontologies published until 2017 in two outlets: the Semantic Web Journal and the Journal of Web Semantics. Gaps and opportunities for future research on ontology evaluation are exposed towards the end of the paper.

NIST Workshop on Ontology Evaluation (NISTIR 7774)

2011

The National Institute of Standards and Technology sponsored a workshop in October 2007 on the subject of ontology evaluation. An international group of invited experts met for two days to discuss problems in measuring ontology quality. The workshop highlighted several divisions among ontology developers regarding approaches to ontology evaluation. These divisions were generally reflective of the opinions of the participants. However, the workshop documented a paucity of empirical evidence in support of any particular position. Given the importance of ontologies to every knowledge-intensive human activity, there is an urgent need for research to develop an empirically derived knowledge base of best practices in ontology engineering and methods for assuring ontology quality over time. This is a report of the workshop discussion and brainstorming by the participants about what such a research program might look like.

The report identifies several problems that currently limit the use of ontologies: lack of a systematic method for evaluating ontologies, inadequate techniques for verification and validation, lack of standard methods for comparing ontologies, and a paucity of real-world applications demonstrating the effectiveness of ontologies. To address these issues, a workshop was held at the National Institute of Standards and Technology on October 26 and 27, 2007 to generate a research plan for the development of systematic methods for evaluating ontologies. The co-chairs of the workshop were Ram D. Sriram (National Institute of Standards and Technology), Mark A. Musen (Stanford University), and Carol A. Bean (National Institutes of Health). The topics for the workshop included the following:

- Representation. The language in which an ontology is expressed (its metalanguage) should be used according to its intended syntax and semantics, to ensure that the ontology is properly understood by the user community and by computer-based tools. This topic addresses how to check that an ontology uses its metalanguage properly.
- Accuracy. A well-constructed ontology is not very useful if its content is not accurate. This topic concerns methods to ensure that an ontology reflects the latest domain knowledge.
- Reasoners. An ontology can support automatic computation of knowledge that is otherwise not explicit in the ontology. This topic addresses how to determine that automatically deduced information is consistent and valid (a minimal sketch of such a check follows this entry).
- Performance metrics. Reasoners and other computational services are not very useful if they consume too many resources, including compute time. This topic concerns the bounds that users should expect from various kinds of computational services.
- Tools and Testbeds. Ontology evaluation is a complex task that can be facilitated by testing environments, graphical tools, and automation of some aspects of evaluation. This topic addresses computer-aided ontology evaluation.
- Certification. Ontologies that pass rigorous evaluation should be recognized by the community, to encourage the development and adoption of those of higher quality. This topic concerns methods for the official recognition of ontologies that meet high standards. Of particular concern is the role of social engineering in developing practices and tools that support the routine assessment and review of ontologies by the people who use them.

The workshop had several presentations and breakout sessions. This report summarizes these presentations and breakout sessions.
In our report of the discussions following each presentation, we use the abbreviation AM to denote an audience member, unless otherwise specified. Additional resources related to the workshop, including slides from each of the presentations, are available at http://sites.google.com/a/cme.nist.gov/workshop-on-ontology-evaluation/Home/.

Presentation Summaries: Summaries of the presentations, except Michael Uschold's talk entitled "Evaluating Ontologies based on Requirements," are provided below.
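The Reasoners and Performance metrics topics above can be illustrated with a small automated check, as referenced in the list. The sketch below assumes Python with owlready2, which drives the bundled HermiT reasoner and requires a Java runtime; the path ontology.owl is a placeholder. It reports logical inconsistency and unsatisfiable classes and times the reasoning run; it is an illustration, not a tool discussed at the workshop.

```python
# Minimal sketch of the "Reasoners" and "Performance metrics" checks:
# run a DL reasoner over the ontology, report unsatisfiable classes, and
# time the run. Assumes owlready2 (bundled HermiT, needs a Java runtime);
# "ontology.owl" is a placeholder path.
import time
from owlready2 import (get_ontology, sync_reasoner, default_world,
                       OwlReadyInconsistentOntologyError)

onto = get_ontology("file://ontology.owl").load()

start = time.perf_counter()
try:
    with onto:
        sync_reasoner()  # classify the ontology with HermiT
except OwlReadyInconsistentOntologyError:
    print("ontology is logically inconsistent")
else:
    # Unsatisfiable classes (equivalent to owl:Nothing) after reasoning.
    for cls in default_world.inconsistent_classes():
        print(f"unsatisfiable class: {cls}")
print(f"reasoning took {time.perf_counter() - start:.2f} s")
```

Wall-clock time is only the crudest performance metric; the workshop's question of what bounds users should expect would also involve memory use and scaling with ontology size.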

Assessing the Quality of Ontology

2011

In the information sciences, an ontology specifies the conceptual structure of a knowledge domain through its vocabulary and its meaning, or semantics. Examples of ontologies include database schemas, taxonomies and library classification schemes. Computer-based ontologies specify a common understanding of a domain both for its different human users and for automated information processes such as interpretation, aggregation and manipulation. As an ontology is fundamental to systems in an organisation and to communication between organisations, its quality is critically important for the sound operation of those systems. There is a problem, however: there seems to be no systematic basis or method for assessing the quality of an ontology. There is an abundance of literature proposing ideas about what constitutes a 'good' ontology, but it is diverse, incoherent and untested. This paper reports research which reviewed the literature, extracted some 220 quali...

Methods for ontology evaluation

2005

A classification of methods and tools for the evaluation of ontologies in industrial practice is provided. It concerns methods and tools to:
