Arne Berre | SINTEF - Academia.edu

Papers by Arne Berre

SBVR as a Semantic Hub for Integration of Heterogeneous Systems

Rules and Rule Markup Languages for the Semantic Web, 2013


Standards and Initiatives for Service Modeling - The Case of OMG SoaML

Service modeling is a key element of any service-oriented system. It is the foundation on which core service-related tasks such as service discovery, composition, and mediation rely. During the past years standardization bodies such as W3C, OMG and OASIS have been working on standardizing various aspects of services such as service functionalities, behavior, quality of services, etc. At the same time, initiatives from academia focused on developing ontologies and formal languages for specifying services. In this paper we give a brief overview of relevant initiatives and standardization activities in the area of service modeling, and, as an example of the use of such standards, guide the reader through the use of the OMG Service oriented architecture Modeling Language (SoaML) in a concrete service-oriented scenario in the manufacturing domain.


Open Business Model, Process and Service Innovation with VDML and ServiceML

John Wiley & Sons, Ltd eBooks, Apr 11, 2014


Big Data in Bioeconomy


Hackathons as A Capacity Building Tool for Environmental Applications

AGUFM, Dec 1, 2017


Towards precision fishery


Building the DataBench Workflow and Architecture

Lecture Notes in Computer Science, 2020

In the era of Big Data and AI, it is challenging to know all technical and business advantages of the emerging technologies. The goal of DataBench is to design a benchmarking process helping organizations developing Big Data Technologies (BDT) to reach for excellence and constantly improve their performance, by measuring their technology development activity against parameters of high business relevance. This paper focuses on the internals of the DataBench framework and presents our methodological workflow and framework architecture.


Open Data, VGI and Citizen Observatories INSPIRE Hackathon

International Journal of Spatial Data Infrastructures Research, Apr 24, 2018

In 2016, the INSPIRE Conference hosted the first INSPIRE hackathon on volunteered geographic information and citizen observatories, also known as the INSPIRE Hackathon. The organisers, mostly representatives of European research and innovation projects, continued this activity with the next INSPIRE Conference in 2017. The INSPIRE Hackathon is a collaborative event for developers, researchers, designers and others interested in open data, volunteered geographic information and citizen observatories. The main driving force for the INSPIRE Hackathon is provided by experts from existing EU projects, and its primary objective is to share knowledge and experience between the participants and demonstrate to wider audiences the power of data and information supported by modern technologies and common standards, originating from INSPIRE, Copernicus, GEOSS and other initiatives. This paper describes the history and background of the INSPIRE Hackathon, the various INSPIRE-related hackathons already organised, supporting projects, the results of INSPIRE Hackathon 2017 and the authors’ vision of future activities.


Unified Discovery and Composition of Heterogeneous Services

The MIT Press eBooks, May 1, 2009


Open Business Model Innovation with the NEFFICS platform and VDML


Big Data and AI Pipeline Framework: Technology Analysis from a Benchmarking Perspective

Springer eBooks, 2022


DataBio Deliverable D4.4 – Service Documentation


Relating Big Data Business and Technical Performance Indicators

The use of big data in organizations involves numerous decisions on the business and technical side. While the assessment of technical choices has been studied by introducing technical benchmarking approaches, the study of the value of big data and of the impact of business key performance indicators (KPIs) on technical choices is still an open problem. The paper discusses a general analysis framework for analyzing big data projects with respect to both technical and business performance indicators, and presents the initial results emerging from a first empirical analysis conducted within European companies and research centers within the European DataBench project and the activities of the benchmarking working group of the Big Data Value Association (BDVA). An analysis method is presented, discussing the impact of confidence and support measurements, and two directions of analysis are studied: the impact of business KPIs on technical parameters and the study of the most important indicators both on the...
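The confidence and support measurements mentioned in the abstract can be illustrated with a minimal sketch. This is not the DataBench implementation; the rule items and survey records below are hypothetical, chosen only to show how support and confidence quantify co-occurrence of a business KPI with a technical choice:

```python
# Illustrative sketch (not DataBench code): support and confidence for
# rules of the form "business KPI -> technical choice" over survey records.

def support(records, itemset):
    """Fraction of records that contain every item in `itemset`."""
    itemset = set(itemset)
    return sum(1 for r in records if itemset <= r) / len(records)

def confidence(records, antecedent, consequent):
    """Estimated P(consequent | antecedent) over the records."""
    both = set(antecedent) | set(consequent)
    return support(records, both) / support(records, antecedent)

# Hypothetical responses: each record mixes business KPIs and technical choices.
records = [
    {"kpi:time-to-market", "tech:stream-processing"},
    {"kpi:time-to-market", "tech:stream-processing", "tech:nosql"},
    {"kpi:cost-reduction", "tech:nosql"},
    {"kpi:time-to-market", "tech:batch-processing"},
]

print(support(records, {"kpi:time-to-market"}))  # 0.75
print(confidence(records, {"kpi:time-to-market"}, {"tech:stream-processing"}))
```

A rule is then judged interesting when both its support (how often the pattern occurs at all) and its confidence (how reliably the KPI implies the technical choice) exceed chosen thresholds.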


Big Data Technologies in DataBio

Big Data in Bioeconomy, 2021

In this introductory chapter, we present the technological background needed for understanding the work in DataBio. We start with basic concepts of Big Data including the main characteristics volume, velocity and variety. Thereafter, we discuss data pipelines and the Big Data Value (BDV) Reference Model that is referred to repeatedly in the book. The layered reference model ranges from data acquisition from sensors up to visualization and user interaction. We then discuss the differences between open and closed data. These differences are important for farmers, foresters and fishermen to understand, when they are considering sharing their professional data. Data sharing is significantly easier, if the data management conforms to the FAIR principles. We end the chapter by describing our DataBio platform that is a software development platform. It is an environment in which a piece of software is developed and improved in an iterative process providing a toolset for services in agricu...


Correction to: Big Data in Bioeconomy

Big Data in Bioeconomy, 2021

The original version of the book was inadvertently published with the wrong affiliation for the editor “Tomas Mildorf” in the front matter. The affiliation has been changed from “Plan4All Horní Bříza, Czech Republic” to “University of West Bohemia, Univerzitni 8, 301 00 Plzen, Czech Republic”.


Environmental data value stream as traceable linked data - Iliad Digital Twin of the Ocean case

In distributed heterogeneous environmental data ecosystems, the number of data sources and the volume and variance of derivatives, purposes, formats, and replicas are steadily growing. In theory, this can enrich the information system as a whole, enabling new data value to be revealed by combining and fusing several data sources and data types, and by searching for further relevant information hidden behind the variety of expressions, formats, replicas, and unknown reliability. It is now visible how complex data alignment is; moreover, it is not always justified due to capacity and business issues. One of the most challenging, but also most rewarding, approaches is semantic alignment, which promises to fill the information gap in data discovery and joins. To formalise it, an essential enabler is an aligned, linked, and machine-readable data model enabling the specification of relations between data elements and the information generated from them. The Iliad - digital twins of the ocean are case...


An Agile Model-Based Framework for Service Innovation for the Future Internet

Springer eBooks, 2012


Model-driven rule-based mediation in XML data exchange

XML data exchange has become ubiquitous in Business to Business (B2B) collaborations. Automating as much as possible the exchange of XML data between enterprise systems is a key requirement for ensuring agile interoperability and scalability in B2B collaborations. The lack of standardized XML canonical models or schemas in B2B data exchange, as well as semantic differences and inconsistencies between the conceptual models of those that want to exchange XML data, implies that XML data cannot be directly and fully automatically exchanged between B2B systems. We are left with the option of providing techniques and tools to support humans in reconciling the differences and inconsistencies between the data models of the parties involved in a data exchange. In this paper we introduce such a technique and tool for XML data exchange. Our approach is based on a mechanism for lifting XML schemas and instances to an object-oriented model, and on the design and execution of data mediation at the object-oriented level. We use F-logic -- an object-oriented rule language -- together with its Flora2 engine as the underlying mechanism for providing an abstract, object-oriented model of XML schemas and instances, as well as for the specification and execution of the mappings at the model level. This provides us with a fully-fledged tool for design- and run-time data mediation that focuses on the actual semantic models behind the XML schemas, rather than having to deal with the technicalities of XML in the data mediation process. Finally, we present the architecture of the current data exchange system and report on a preliminary evaluation of our system.
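The lifting-and-mediation idea can be sketched in miniature. The paper performs the lifting in F-logic and executes mappings with the Flora2 engine; the plain-Python version below, with a hypothetical purchaseOrder schema and attribute map, only illustrates the concept of mediating at the object level instead of manipulating raw XML:

```python
import xml.etree.ElementTree as ET

# Illustrative sketch only: stands in for the F-logic/Flora2 machinery to
# show lifting XML instances to objects and mediating at the object level.

class Obj:
    """Minimal object-level representation of a lifted XML element."""
    def __init__(self, cls, **attrs):
        self.cls = cls
        self.attrs = attrs

def lift(element):
    """Lift an XML element into an Obj; leaf children become attributes."""
    attrs = dict(element.attrib)
    for child in element:
        if len(child) == 0 and child.text:
            attrs[child.tag] = child.text.strip()
    return Obj(element.tag, **attrs)

def mediate(obj, attribute_map, target_cls):
    """Object-level mapping rule: rename attributes into the target model."""
    return Obj(target_cls, **{attribute_map.get(k, k): v
                              for k, v in obj.attrs.items()})

# Hypothetical source document from one B2B party.
source_xml = "<purchaseOrder><buyerName>ACME</buyerName><total>120</total></purchaseOrder>"
po = lift(ET.fromstring(source_xml))
order = mediate(po, {"buyerName": "customer", "total": "amount"}, "Order")
print(order.cls, order.attrs)  # Order {'customer': 'ACME', 'amount': '120'}
```

The point of the design is that the mapping (`buyerName` to `customer`, `total` to `amount`) is expressed against the object model, so the mediation rules never touch XML syntax directly.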


A Semi-automatic Transformation Approach for Semantic Interoperability

As data exchange and model transformation become ubiquitous, it is a key requirement to improve the interoperability of enterprise systems at the semantic level. Many approaches in Model-driven Architecture (MDA) and Model-driven Interoperability (MDI) have emerged to fulfil this requirement. However, most of them still demand significant user input and provide a low degree of automation, especially when it comes to finding the mappings. A generic approach that can easily handle both semantic interoperability and automatic transformation is currently missing. This paper presents AutoMapping, a semi-automatic model transformation architecture. The approach focuses on two aspects: 1) semi-automatic mapping between data models expressed as class diagrams, involving minimal user interaction at design-time; 2) generation of executable mappings. In particular, at design-time a semantic engine is devised that resolves various kinds of semantic attribute mismatches, such as type, scale, synonym, homonym, and granularity. Furthermore, a heuristic-based similarity analysis between each pair of classes is proposed, which takes all relations of the classes into account, such as inheritance and reference. Finally, a method is given to match fragments and then generate a mapping specification that conforms to the proposed mapping metamodel for resolving the existing semantic mismatches. The main contribution of this paper is a generic, platform-independent approach for semi-automatic model transformation towards semantic interoperability, with a tool-based implementation and a motivating case experiment showing the feasibility of using MDA and MDI techniques for semantic interoperability.
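A heuristic class-similarity analysis of the kind described could, for illustration, combine attribute-name overlap with overlap of related classes. This is not the AutoMapping implementation; the weights, class names, and Jaccard measure below are hypothetical choices used only to show the shape of such a heuristic:

```python
# Illustrative sketch (not AutoMapping code): pairwise class similarity
# combining attribute overlap and relation overlap with hypothetical weights.

def jaccard(a, b):
    """Set overlap: |A ∩ B| / |A ∪ B| (1.0 for two empty sets)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def class_similarity(attrs_a, rels_a, attrs_b, rels_b, w_attr=0.7, w_rel=0.3):
    """Weighted combination of attribute and relation similarity."""
    return w_attr * jaccard(attrs_a, attrs_b) + w_rel * jaccard(rels_a, rels_b)

# Two candidate classes from different models: (attributes, related classes).
customer = (["name", "address", "phone"], ["Order"])
client = (["name", "address", "email"], ["Order", "Invoice"])

score = class_similarity(customer[0], customer[1], client[0], client[1])
print(round(score, 3))  # 0.5
```

Class pairs whose score exceeds a threshold would then be proposed to the user as mapping candidates, keeping the interaction semi-automatic as the abstract describes.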


SoA-in-Practise: R&D Activities in Norway

