Arne Berre - Profile on Academia.edu

Papers by Arne Berre

SBVR as a Semantic Hub for Integration of Heterogeneous Systems

Rules and Rule Markup Languages for the Semantic Web, 2013

Extracting integration rules to handle semantic heterogeneity is one of the main challenges of achieving seamless connectivity between distributed systems. The machine and human readability and platform independence of the Semantics of Business Vocabulary and Rules (SBVR) make it a potentially suitable and interesting candidate for study as a central semantic hub for different systems. Semantic heterogeneity can be identified by comparing and analyzing vocabularies, fact models and business rules in the hub. Integration rules can then be extracted based on the semantic heterogeneity analysis. This article investigates and evaluates the usage of SBVR in heterogeneous systems integration. It provides a real-life case study and experience report on extracting integration rules based on an analysis of two heterogeneous Norwegian public-sector IT systems modeled in SBVR.
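
As a rough illustration of the hub idea, the sketch below compares two simplified, SBVR-style vocabularies to flag semantic heterogeneity and suggest candidate integration rules; the term names, definitions and rule wording are hypothetical and not taken from the case study.

```python
# Illustrative sketch (not the paper's tooling): comparing two simplified
# SBVR-style vocabularies to flag semantic heterogeneity between systems.
# Term names, vocabularies and mismatch categories are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class Term:
    name: str
    definition: str

def find_heterogeneity(vocab_a: dict[str, Term], vocab_b: dict[str, Term]) -> list[str]:
    """Return human-readable notes on candidate integration rules."""
    notes = []
    for name, term_a in vocab_a.items():
        term_b = vocab_b.get(name)
        if term_b is None:
            notes.append(f"'{name}' exists only in system A: a mapping or exclusion rule is needed.")
        elif term_a.definition != term_b.definition:
            notes.append(f"'{name}' is a homonym (same name, different definitions): add a disambiguation rule.")
    for name in vocab_b.keys() - vocab_a.keys():
        notes.append(f"'{name}' exists only in system B: a mapping or exclusion rule is needed.")
    return notes

vocab_a = {"citizen": Term("citizen", "person registered as resident")}
vocab_b = {"citizen": Term("citizen", "person holding citizenship"),
           "case": Term("case", "open administrative matter")}

for note in find_heterogeneity(vocab_a, vocab_b):
    print(note)
```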

Standards and Initiatives for Service Modeling - The Case of OMG SoaML

Service modeling is a key element of any service-oriented system. It is the foundation on which core service-related tasks such as service discovery, composition, and mediation rely. During the past years standardization bodies such as W3C, OMG and OASIS have been working on standardizing various aspects of services such as service functionalities, behavior, quality of services, etc. At the same time, initiatives from academia focused on developing ontologies and formal languages for specifying services. In this paper we give a brief overview of relevant initiatives and standardization activities in the area of service modeling, and, as an example of the use of such standards, guide the reader through the use of the OMG Service oriented architecture Modeling Language (SoaML) in a concrete service-oriented scenario in the manufacturing domain.

Open Business Model, Process and Service Innovation with VDML and ServiceML

John Wiley & Sons, Ltd eBooks, Apr 11, 2014

This paper presents the NEFFICS platform, which provides a foundation for cloud-based open business model innovation, process innovation and service innovation for networked enterprises. Business model innovation is supported with a basis in a business model framework with six views, where each view is supported by a corresponding diagram from the Value Delivery Modeling Language (VDML). Process innovation is supported by VDML activity diagrams with options for mappings to Business Process Model and Notation (BPMN) and Case Management Model and Notation (CMMN). Service innovation is supported by the Service Modelling Language (ServiceML), which shares the core collaboration models of VDML for role modelling and value networks.

Big Data in Bioeconomy


Hackathons as A Capacity Building Tool for Environmental Applications

AGUFM, Dec 1, 2017

Building the DataBench Workflow and Architecture

Lecture Notes in Computer Science, 2020

In the era of Big Data and AI, it is challenging to know all the technical and business advantages of emerging technologies. The goal of DataBench is to design a benchmarking process that helps organizations developing Big Data Technologies (BDT) reach for excellence and constantly improve their performance by measuring their technology development activity against parameters of high business relevance. This paper focuses on the internals of the DataBench framework and presents our methodological workflow and framework architecture.

Open Data, VGI and Citizen Observatories INSPIRE Hackathon

International Journal of Spatial Data Infrastructures Research, Apr 24, 2018

In 2016, the INSPIRE Conference hosted the first INSPIRE hackathon on volunteered geographic information and citizen observatories, also known as the INSPIRE Hackathon. The organisers, mostly representatives of European research and innovation projects, continued this activity with the next INSPIRE Conference in 2017. The INSPIRE Hackathon is a collaborative event for developers, researchers, designers and others interested in open data, volunteered geographic information and citizen observatories. The main driving force for the INSPIRE Hackathon is provided by experts from existing EU projects, and its primary objective is to share knowledge and experience between the participants and demonstrate to wider audiences the power of data and information supported by modern technologies and common standards, originating from INSPIRE, Copernicus, GEOSS and other initiatives. This paper describes the history and background of the INSPIRE Hackathon, the various INSPIRE-related hackathons already organised, supporting projects, the results of INSPIRE Hackathon 2017 and the authors' vision of future activities.

Unified Discovery and Composition of Heterogeneous Services

The MIT Press eBooks, May 1, 2009

Service-Oriented Development (SOD) is currently gaining momentum and has been marked as the future trend in software development. The building blocks of a service-oriented system are services, which may be instantiated as web, grid and P2P services. For SOD to prevail there is a need to support the discovery and composition of services. However, this is not an easy task, due to the heterogeneity and incompatibility between the architectural models, protocols, and standards employed by web, grid and P2P services for their description, discovery and composition. In this paper, we present a unified approach for discovering heterogeneous services and composing them in a visual manner. The approach comprises a set of complying languages and their enacting tools.
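
To make the "unified discovery" idea concrete, here is a minimal sketch of a uniform query interface placed in front of heterogeneous registries; the registry classes, their results and the keyword are invented stand-ins, not the chapter's actual languages or tools.

```python
# A minimal sketch of unified discovery: one query interface in front of
# heterogeneous registries. The registry classes and their contents are
# hypothetical placeholders.

from abc import ABC, abstractmethod

class ServiceRegistry(ABC):
    @abstractmethod
    def discover(self, keyword: str) -> list[str]:
        """Return identifiers of services matching the keyword."""

class WebServiceRegistry(ServiceRegistry):
    def discover(self, keyword: str) -> list[str]:
        return [f"wsdl:{keyword}-service"]          # e.g. results of a WSDL/UDDI search

class P2PServiceRegistry(ServiceRegistry):
    def discover(self, keyword: str) -> list[str]:
        return [f"p2p:{keyword}-peer-service"]      # e.g. results of a peer overlay lookup

def unified_discover(keyword: str, registries: list[ServiceRegistry]) -> list[str]:
    """Aggregate matches from all registries behind one uniform call."""
    results: list[str] = []
    for registry in registries:
        results.extend(registry.discover(keyword))
    return results

print(unified_discover("weather", [WebServiceRegistry(), P2PServiceRegistry()]))
```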

Open Business Model Innovation with the NEFFICS platform and VDML

The NEFFICS platform combines an open innovation social media platform with a business modelling and operations platform. Business model innovation is supported with a basis in a business model framework with seven dimensions, where each dimension is supported by a corresponding diagram view from our proposed Value Delivery Modeling Language (VDML). The paper shows the use of the various VDML diagrams, illustrated by an example. Business model innovation can take place through new ideas within all of the dimensions, which can be elaborated and analysed further through a two-level diagram approach.

Big Data and AI Pipeline Framework: Technology Analysis from a Benchmarking Perspective

Springer eBooks, 2022

Big Data and AI pipeline patterns provide a good foundation for the analysis and selection of technical architectures for Big Data and AI systems. Experience from many projects in the Big Data PPP program has shown that a number of projects use similar architectural patterns, with variations only in the choice of technology components within the same pattern. The DataBench project has developed a Big Data and AI Pipeline Framework, which is used for the description of pipeline steps in Big Data and AI projects and supports the classification of benchmarks. This includes the four pipeline steps of Data Acquisition/Collection and Storage, Data Preparation and Curation, Data Analytics with AI/Machine Learning, and Action and Interaction, including Data Visualization and User Interaction as well as API Access. It has also created a toolbox which supports the identification and use of existing benchmarks according to these steps, in addition to all of the different technical areas and different data types in the BDV Reference Model. An observatory, a tool accessed via the toolbox for observing the popularity, ...
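
The four pipeline steps named above can be pictured as a simple chain of processing stages. The sketch below only illustrates that structure, assuming toy data and placeholder step implementations; only the step names come from the DataBench framework.

```python
# Sketch of the four pipeline steps chained as simple callables.
# Step bodies and the "buoy-42" source are invented placeholders.

def acquire_and_store(source: str) -> list[dict]:
    # Data Acquisition/Collection and Storage
    return [{"sensor": source, "value": v} for v in (1.0, 2.5, None, 4.0)]

def prepare_and_curate(records: list[dict]) -> list[dict]:
    # Data Preparation and Curation: drop incomplete records
    return [r for r in records if r["value"] is not None]

def analyse(records: list[dict]) -> dict:
    # Data Analytics with AI/Machine Learning (here just a trivial aggregate)
    values = [r["value"] for r in records]
    return {"mean": sum(values) / len(values), "count": len(values)}

def act_and_interact(result: dict) -> None:
    # Action and Interaction: visualisation, user interaction or API access
    print(f"Dashboard update: {result}")

act_and_interact(analyse(prepare_and_curate(acquire_and_store("buoy-42"))))
```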

DataBio Deliverable D4.4 – Service Documentation

The public deliverable D4.4 describes the software components and processes (here called pipelines, as the processes mostly consist of Big Data volumes streaming through successive processing steps) to be utilized by the DataBio Platform and pilots. The pilot services were tested through two phases, Trial 1 and Trial 2 of the project. Most of the components were used in both Trials, with some updates to their features for Trial 2. In addition, this deliverable reports which components were deployed in each pilot and the development platform on which the pilots tested their Big Data solutions. The document aggregates information dispersed among various deliverables (namely [REF-01]-[REF-06]). The aim of this deliverable is to create a comprehensive overview of DataBio technical results. The objective of WP4 "DataBio Platform with Pilot Support" was to configure and adopt Big Data technologies for agriculture, forestry, and fishery. The work package, together with WP5 "Earth Observation and Geospatial Data and Services", established a platform for the development of bioeconomy applications. The software and dataset repository DataBio Hub is a central resource of the platform. In doing so, WP4 supported the DataBio pilots in their needs for Big Data technologies. This deliverable starts with an overview of DataBio building blocks such as the platform architecture, software components, datasets and models that offer functionalities primarily for services in the domains of agriculture, forestry, and fishery. Then follows their exploitation for the identification of cross-reusable (sub-)pipelines ("design patterns") that can be used across the pilots of the project and applied to other domains. The pipelines are one of the major exploitable assets of DataBio.

Relating Big Data Business and Technical Performance Indicators

The use of big data in organizations involves numerous decisions on the business and technical side. While the assessment of technical choices has been studied by introducing technical benchmarking approaches, the study of the value of big data and of the impact of business key performance indicators (KPIs) on technical choices is still an open problem. The paper discusses a general framework for analyzing big data projects with respect to both technical and business performance indicators, and presents the initial results emerging from a first empirical analysis conducted with European companies and research centers within the European DataBench project and the activities of the benchmarking working group of the Big Data Value Association (BDVA). An analysis method is presented, discussing the impact of confidence and support measurements, and two directions of analysis are studied: the impact of business KPIs on technical parameters and the study of the most important indicators both on the...
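
The support and confidence measurements mentioned above can be illustrated with a small, invented set of survey responses; the KPI and technology names below are hypothetical and the numbers are not from the DataBench analysis.

```python
# Toy illustration of support and confidence for an association
# "business KPI -> technical choice", computed over invented responses.

responses = [
    {"kpi": "time-to-market", "tech": "stream-processing"},
    {"kpi": "time-to-market", "tech": "stream-processing"},
    {"kpi": "time-to-market", "tech": "batch-processing"},
    {"kpi": "cost-reduction", "tech": "batch-processing"},
]

def support(kpi: str, tech: str) -> float:
    """Fraction of all responses containing both the KPI and the technical choice."""
    hits = sum(1 for r in responses if r["kpi"] == kpi and r["tech"] == tech)
    return hits / len(responses)

def confidence(kpi: str, tech: str) -> float:
    """Fraction of responses with the KPI that also made the technical choice."""
    with_kpi = [r for r in responses if r["kpi"] == kpi]
    hits = sum(1 for r in with_kpi if r["tech"] == tech)
    return hits / len(with_kpi) if with_kpi else 0.0

print(support("time-to-market", "stream-processing"))     # 0.5
print(confidence("time-to-market", "stream-processing"))  # ~0.67
```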

Big Data Technologies in DataBio

Big Data in Bioeconomy, 2021

In this introductory chapter, we present the technological background needed for understanding the work in DataBio. We start with basic concepts of Big Data, including the main characteristics: volume, velocity and variety. Thereafter, we discuss data pipelines and the Big Data Value (BDV) Reference Model that is referred to repeatedly in the book. The layered reference model ranges from data acquisition from sensors up to visualization and user interaction. We then discuss the differences between open and closed data. These differences are important for farmers, foresters and fishermen to understand when they are considering sharing their professional data. Data sharing is significantly easier if the data management conforms to the FAIR principles. We end the chapter by describing the DataBio platform, a software development platform. It is an environment in which a piece of software is developed and improved in an iterative process, providing a toolset for services in agricu...

Environmental data value stream as traceable linked data - Iliad Digital Twin of the Ocean case

In distributed, heterogeneous environmental data ecosystems, the number of data sources and the volume and variance of derivatives, purposes, formats, and replicas are growing steadily. In theory, this can enrich the information system as a whole, enabling new data value to be revealed via the combination and fusion of several data sources and data types, searching for further relevant information hidden behind the variety of expressions, formats, replicas, and unknown reliability. It is now evident how complex data alignment is and, moreover, that it is not always justified due to capacity and business issues. One of the most challenging, but also most rewarding, approaches is semantic alignment, which promises to fill the information gap in data discovery and joins. To formalise it, an inevitable enabler is an aligned, linked, and machine-readable data model enabling the specification of relations between data elements and the information they generate. The Iliad digital twins of the ocean are case...
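
As a minimal sketch of what a traceable, machine-readable link between data elements could look like, the snippet below records a provenance relation as RDF using the rdflib library; the dataset URIs and properties chosen are illustrative assumptions, not the Iliad data model.

```python
# Sketch only: expressing a traceable link between a derived data product
# and its source as linked data with rdflib. URIs are hypothetical examples.

from rdflib import Graph, Literal, Namespace

EX = Namespace("https://example.org/ocean/")
PROV = Namespace("http://www.w3.org/ns/prov#")

g = Graph()
g.bind("ex", EX)
g.bind("prov", PROV)

source = EX["sst-observations-raw"]       # the raw observation dataset
derived = EX["sst-weekly-mean"]           # a product derived from it

g.add((derived, PROV.wasDerivedFrom, source))           # traceability link
g.add((derived, EX.unit, Literal("degree Celsius")))    # semantic annotation

print(g.serialize(format="turtle"))
```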

An Agile Model-Based Framework for Service Innovation for the Future Internet

Springer eBooks, 2012

Service innovation for the future internet, with service design and service engineering, can benefit from a combination of an agile and a model-based development approach. An agile approach focuses on early understanding of user needs and service touchpoints, to optimise for the best user experience and rapid adaptation to emerging user needs. A model-based approach allows for a combination of various domain-specific languages that are suited for expressing services on different abstraction levels, aimed at supporting higher productivity and quality in service engineering. This paper presents an agile model-based framework using the emerging OMG domain-specific standard languages VDML, BPMN, IFML and SoaML for service design and engineering, relating value models, process models, user interface and interaction flow models, and service architectures and service contract models. The associated methodology and set of practices can be supported by the emerging OMG FACESEM standard (a Foundation for the Agile Creation and Enactment of Software Engineering Methods).
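
To illustrate how such a framework relates models on different abstraction levels, the sketch below builds a tiny traceability chain from a value element down to a service contract; the element names are invented, and the real framework uses VDML, BPMN, IFML and SoaML models rather than Python records.

```python
# Hypothetical traceability structure across model kinds; not the framework's
# actual metamodel, just an illustration of "value -> process -> UI -> service".

from dataclasses import dataclass, field

@dataclass
class ModelElement:
    kind: str                       # e.g. "VDML value", "BPMN process", "IFML flow", "SoaML contract"
    name: str
    realised_by: list["ModelElement"] = field(default_factory=list)

contract = ModelElement("SoaML service contract", "BookAppointment")
flow     = ModelElement("IFML interaction flow", "AppointmentBookingUI", [contract])
process  = ModelElement("BPMN process", "HandleAppointmentRequest", [flow])
value    = ModelElement("VDML value proposition", "FastAccessToCare", [process])

def trace(element: ModelElement, depth: int = 0) -> None:
    """Print the chain from a value element down to its realising models."""
    print("  " * depth + f"{element.kind}: {element.name}")
    for child in element.realised_by:
        trace(child, depth + 1)

trace(value)
```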

Model-driven rule-based mediation in XML data exchange

XML data exchange has become ubiquitous in Business to Business (B2B) collaborations. Automating the exchange of XML data between enterprise systems as much as possible is a key requirement for ensuring agile interoperability and scalability in B2B collaborations. The lack of standardized XML canonical models or schemas in B2B data exchange, as well as semantic differences and inconsistencies between the conceptual models of those that want to exchange XML data, implies that XML data cannot be directly and fully automatically exchanged between B2B systems. We are left with the option of providing techniques and tools to support humans in reconciling the differences and inconsistencies between the data models of the parties involved in a data exchange. In this paper we introduce such a technique and tool for XML data exchange. Our approach is based on a lifting mechanism from XML schemas and instances to an object-oriented model, and the design and execution of data mediation at the object-oriented level. We use F-logic -- an object-oriented rule language -- together with its Flora2 engine as the underlying mechanism for providing an abstract, object-oriented model of XML schemas and instances, as well as for the specification and execution of the mappings at the model level. This provides us with a fully-fledged tool for design- and run-time data mediation that focuses on the actual semantic models behind the XML schemas, rather than having to deal with the technicalities of XML in the data mediation process. Finally, we present the architecture of the current data exchange system and report on a preliminary evaluation of our system.
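
The paper performs mediation in F-logic with the Flora2 engine; the sketch below only illustrates the general lifting idea in Python, mapping an XML instance into an object model, transforming at that level and serialising the result. The element and attribute names are invented for illustration.

```python
# Illustration of "lift -> map at the object level -> lower" for XML exchange.
# Source/target element names and the Order/Invoice schemas are hypothetical.

import xml.etree.ElementTree as ET
from dataclasses import dataclass

SOURCE_XML = "<order><custName>Acme AS</custName><total>120.50</total></order>"

@dataclass
class Order:                      # lifted, object-oriented view of the source schema
    customer_name: str
    total_amount: float

def lift(xml_text: str) -> Order:
    root = ET.fromstring(xml_text)
    return Order(customer_name=root.findtext("custName"),
                 total_amount=float(root.findtext("total")))

def lower(order: Order) -> str:   # mapping to the target schema happens at the object level
    target = ET.Element("Invoice")
    ET.SubElement(target, "Buyer").text = order.customer_name
    ET.SubElement(target, "Amount").text = f"{order.total_amount:.2f}"
    return ET.tostring(target, encoding="unicode")

print(lower(lift(SOURCE_XML)))    # <Invoice><Buyer>Acme AS</Buyer><Amount>120.50</Amount></Invoice>
```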

A Semi-automatic Transformation Approach for Semantic Interoperability

As data exchange and model transformation become ubiquitous, it is a key requirement to improve the interoperability of enterprise systems at the semantic level. Many approaches in Model-driven Architecture (MDA) and Model-driven Interoperability (MDI) have emerged to fulfil this requirement. However, most of them still demand significant user input and provide a low degree of automation, especially when it comes to finding the mappings. A generic approach that can easily handle both semantic interoperability and automatic transformation is currently missing. This paper presents AutoMapping, a semi-automatic model transformation architecture. The approach focuses on two aspects: 1) semi-automatic mapping between data models expressed as class diagrams, involving minimal user interaction at design time; 2) generation of executable mappings. In particular, at design time, a semantic engine is devised that resolves various kinds of semantic attribute mismatches, such as type, scale, synonym, homonym and granularity. Furthermore, a heuristic-based similarity analysis between each pair of classes is proposed, which takes all relations of the classes into account, such as inheritance and reference. Finally, a method is given to match fragments and then generate a mapping specification that conforms to the proposed mapping metamodel for solving existing semantic mismatches. The main contribution of this paper is a generic, platform-independent approach for semi-automatic model transformation towards semantic interoperability, with a tool-based implementation and a motivating case experiment showing the feasibility of using MDA and MDI techniques for semanti...
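
A heuristic attribute-similarity score of the kind described above might combine name similarity, a synonym table and a type check; the weights, synonym pairs and example attributes in the sketch below are illustrative assumptions, not the AutoMapping engine's actual heuristics.

```python
# Sketch of a heuristic attribute-similarity score handling synonyms and
# type mismatches. Synonym table, weights and examples are invented.

from difflib import SequenceMatcher

SYNONYMS = {("customer", "client"), ("amount", "total")}

def attribute_similarity(name_a: str, type_a: str, name_b: str, type_b: str) -> float:
    a, b = name_a.lower(), name_b.lower()
    if a == b or (a, b) in SYNONYMS or (b, a) in SYNONYMS:
        name_score = 1.0
    else:
        name_score = SequenceMatcher(None, a, b).ratio()   # lexical fallback
    type_score = 1.0 if type_a == type_b else 0.5          # penalise type mismatch
    return 0.7 * name_score + 0.3 * type_score             # weighted combination

print(attribute_similarity("customer", "string", "client", "string"))   # 1.0
print(attribute_similarity("orderDate", "date", "created", "string"))   # lower score
```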

SoA-in-Practise: R&D Activities in Norway

Supporting Sustainable Mobility Using Mobile Technologies and Personalized Environmental Information: The Citi-Sense-MOB Approach in Oslo, Norway

Progress in IS, 2016

Urban and peri-urban growth is increasing worldwide, and Europe is now one of the most urbanized continents in the world. Oslo is one of the fastest growing cities in Europe. This creates pressure on its infrastructure, including traffic and urban environmental quality. Additionally, vehicular traffic is a major contributor to CO2 emissions, which affect climate change. Air quality is recognized as a major factor for human health; however, although different measures have been implemented, improving air quality and lowering carbon emissions remains an unsolved problem in Oslo.

Organizational Interoperability Supported through Goal Alignment with BMM and Service Collaboration with SoaML

Organisational interoperability requires a matching of goals and of provided and required services between interacting organizations. We show that the recent OMG standards BMM and SoaML can be used by business people to reach this agreement, as well as serve as a foundation for describing the realization of interoperability through cross-organisational business processes and the further realization of IT support with semantic and technical service interoperability. We illustrate the approach with an example of service identification from the Norwegian national Health ICT architecture.
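
A minimal sketch of the goal/service matching idea is shown below: checking which services required by one organisation are provided by a partner. The organisation roles and service names are invented examples, not from the Health ICT case.

```python
# Toy matching of required vs. provided services between two organisations.
# Service names and roles are hypothetical.

required_by_hospital = {"referral submission", "lab result retrieval"}
provided_by_lab      = {"lab result retrieval", "sample tracking"}

matched = required_by_hospital & provided_by_lab
unmatched = required_by_hospital - provided_by_lab

print("Matched services:", matched)        # candidates for a SoaML service contract
print("Interoperability gaps:", unmatched) # goals needing new or mediated services
```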
