Assigning Creative Commons licenses to research metadata
Related papers
Assigning Creative Commons Licenses to Research Metadata: Issues and Cases
Lecture Notes in Computer Science, 2018
This paper discusses the lack of clear licensing and of transparent usage terms and conditions for research metadata. Making research data connected, discoverable and reusable is a key enabler of the new data revolution in research. We discuss how the lack of transparency hinders the discovery of research data and disconnects it from publications and other trusted research outcomes. In addition, we discuss the application of Creative Commons licenses to research metadata, and provide examples of the applicability of this approach to internationally known data infrastructures.
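The core idea, attaching an explicit licence to the metadata record itself so aggregators know they may reuse it, can be sketched as a schema.org-style JSON-LD fragment. A minimal sketch: the vocabulary terms (`license`, `name`, `identifier`) are real schema.org properties, but the dataset name and DOI are invented placeholders, and this is not the metadata format of any specific infrastructure named in the paper.

```python
import json

# A minimal metadata record that declares its own licence explicitly,
# using schema.org vocabulary in JSON-LD. The descriptive metadata is
# released under CC0 so harvesters and aggregators can reuse it freely,
# independently of whatever licence governs the dataset itself.
metadata_record = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "Example survey dataset",                 # invented for illustration
    "identifier": "https://doi.org/10.1234/example",  # placeholder DOI
    "license": "https://creativecommons.org/publicdomain/zero/1.0/",
}

print(json.dumps(metadata_record, indent=2))
```

Declaring the licence as a resolvable URI, rather than free text, is what lets machine agents decide programmatically whether they may harvest and redistribute the record.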
Uncommon Commons? Creative Commons Licencing in Horizon 2020 Data Management Plans
International Journal of Digital Curation
As policies, good practices and mandates on research data management evolve, more emphasis has been put on the licensing of data, which allows potential re-users to quickly identify what they can do with the data in question. In this paper I analyse a pre-existing collection of 840 Horizon 2020 public data management plans (DMPs) to determine which ones mention Creative Commons licences and, among those that do, which licences are being used. I find that 36% of DMPs mention Creative Commons, and among those a number of different approaches towards licensing exist (an overall policy per project, or licensing decisions per dataset, per partner, per data format, or per perceived stakeholder interest), often clad in rather vague language, with CC licences being “recommended” or “suggested”. Some DMPs also “kick the can further down the road” by mentioning that “a” CC licence will be used, but not which one. However, among those DMPs that do m...
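The distinction the paper draws, between DMPs that name a concrete licence and those that vaguely promise "a" CC licence, could be approximated mechanically. A toy sketch, not the author's actual method: it scans DMP text for CC licence identifiers with a regular expression and flags the vague case separately.

```python
import re

# Toy scanner for Creative Commons licence mentions in DMP text.
# Longer licence codes come first in the alternation so that e.g.
# "CC BY-NC-SA" is not truncated to "CC BY-NC".
SPECIFIC = re.compile(
    r"\bCC[ -]?(0|BY(?:[ -](?:NC[ -]SA|NC[ -]ND|SA|NC|ND))?)\b",
    re.IGNORECASE,
)
# Vague mentions that name no concrete licence: "a CC licence will be used".
VAGUE = re.compile(r"\ba\s+(?:CC|Creative Commons)\s+licen[cs]e\b", re.IGNORECASE)

def classify(dmp_text: str) -> dict:
    """Return the specific CC licences mentioned, and whether the text
    only promises 'a' CC licence without naming one."""
    specific = sorted({m.group(0).upper().replace(" ", "-")
                       for m in SPECIFIC.finditer(dmp_text)})
    return {"specific": specific,
            "vague_only": not specific and bool(VAGUE.search(dmp_text))}

print(classify("Data will be released under CC BY-SA where possible."))
print(classify("A CC licence will be applied to all deliverables."))
```

A real analysis would of course need manual reading to catch per-partner and per-dataset policies; the sketch only shows why the vague-language cases are easy to detect but hard to act on.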
2017
It is increasingly common for researchers to make their data freely available. This is often a requirement of funding agencies but also consistent with the principles of open science, according to which all research data should be shared and made available for reuse. Once data is reused, the researchers who have provided access to it should be acknowledged for their contributions, much as authors are recognised for their publications through citation. Hyoungjoo Park and Dietmar Wolfram have studied characteristics of data sharing, reuse, and citation and found that current data citation practices do not yet benefit data sharers, with little or no consistency in their format. More formalised citation practices might encourage more authors to make their data available for reuse.
An Open, FAIRified Data Commons: Proposal for NIH Data Commons Pilot
2017
This proposal is a response to NIH's call for creation of a Data Commons (RM-17-026). The Commons must support use cases of many stakeholders who need access to scholarly process, content, and outcomes in pursuit of knowledge. Moreover, the Commons must be flexible enough to respect researchers’ idiosyncratic workflows, yet specific enough to solve problems that researchers are trying to solve. To meet both demands, a successful Commons will provide core services that are shared across workflows, and flexible interfaces that meet the individual needs of stakeholders. By leveraging existing open tools, an expansive community network, and in-depth expertise, this collaborative team is well positioned to contribute to the Data Commons pilot and beyond.
Data Science Journal
Investments in research that produce scientific and scholarly data can be leveraged by enabling the resulting research data products and services to be used by broader communities and for new purposes, extending reuse beyond the initial users and purposes for which the data were originally collected. Submitting research data to a data repository offers opportunities for the data to be used in the future, providing ways for new benefits to be realized from data reuse. Improvements to data repositories that facilitate new uses of data increase the potential for data reuse and for gains in the value of open data products and services that are associated with such reuse. Assessing and certifying the capabilities and services offered by data repositories provides opportunities for improving the repositories and for realizing the value to be attained from new uses of data. The evolution of data repository certification instruments is described and discussed in terms of the implications for the curation and continuing use of research data.
The FAIR Guiding Principles for scientific data management and stewardship
Scientific Data, 2016
There is an urgent need to improve the infrastructure supporting the reuse of scholarly data. A diverse set of stakeholders, representing academia, industry, funding agencies, and scholarly publishers, have come together to design and jointly endorse a concise and measurable set of principles that we refer to as the FAIR Data Principles. The intent is that these may act as a guideline for those wishing to enhance the reusability of their data holdings. Distinct from peer initiatives that focus on the human scholar, the FAIR Principles put specific emphasis on enhancing the ability of machines to automatically find and use the data, in addition to supporting its reuse by individuals. This Comment is the first formal publication of the FAIR Principles, and includes the rationale behind them and some exemplar implementations in the community.
Supporting discovery through good data management
Good data management is not a goal in itself, but rather is the key conduit leading to knowledge discovery and innovation, and to subsequent data and knowledge integration and reuse by the community after the data publication process. Unfortunately, the existing digital ecosystem surrounding scholarly data publication prevents us from extracting maximum benefit from our research investments (e.g., ref. 1). Partially in response to this, science funders, publishers and governmental agencies are beginning to require data management and stewardship plans for data generated in publicly funded experiments. Beyond proper collection, annotation, and archival, data stewardship includes the notion of 'long-term care' of valuable digital assets, with the goal that they should be discovered and re-used for downstream investigations, either alone or in combination with newly generated data.
The outcomes of good data management and stewardship, therefore, are high-quality digital publications that facilitate and simplify this ongoing process of discovery, evaluation, and reuse in downstream studies. What constitutes 'good data management' is, however, largely undefined, and is generally left as a decision for the data or repository owner. Therefore, bringing some clarity around the goals and desiderata of good data management and stewardship, and defining simple guideposts to inform those who publish and/or preserve scholarly data, would be of great utility. This article describes four foundational principles (Findability, Accessibility, Interoperability, and Reusability) that serve to guide data producers and publishers as they navigate around these obstacles, thereby helping to maximize the added value gained by contemporary, formal scholarly digital publishing. Importantly, it is our intent that the principles apply not only to 'data' in the conventional sense, but also to the algorithms, tools, and workflows that led to those data. All scholarly digital research objects (ref. 2), from data to analytical pipelines, benefit from application of these principles, since all components of the research process must be available to ensure transparency, reproducibility, and reusability. There are numerous and diverse stakeholders who stand to benefit from overcoming these obstacles: researchers wanting to share, get credit for, and reuse each other's data and interpretations; professional data publishers offering their services; software and tool-builders providing data analysis and processing services such as reusable workflows; and funding agencies (private and public), increasingly…
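Because the FAIR principles emphasize machine-actionability, the four letters lend themselves to simple, if crude, automated checks on a metadata record. A sketch under the assumption that the record is a flat dict; these heuristics are illustrative only and are not the formal FAIR maturity indicators, and the field names (`identifier`, `access_url`, `format`, `license`) are assumptions about the record shape.

```python
# Crude, illustrative checks loosely inspired by the FAIR principles.
# NOT the official FAIR metrics; one heuristic per principle.
def fair_hints(record: dict) -> dict:
    return {
        # Findable: a globally unique, persistent identifier (here: a DOI)
        "findable": str(record.get("identifier", "")).startswith("https://doi.org/"),
        # Accessible: a resolvable access URL
        "accessible": bool(record.get("access_url")),
        # Interoperable: a declared, shared data format
        "interoperable": bool(record.get("format")),
        # Reusable: an explicit licence
        "reusable": bool(record.get("license")),
    }

record = {
    "identifier": "https://doi.org/10.1234/example",   # placeholder DOI
    "access_url": "https://repo.example.org/ds/1",     # hypothetical repository
    "format": "text/csv",
    "license": "https://creativecommons.org/licenses/by/4.0/",
}
print(fair_hints(record))
```

The last check is where licensing meets FAIR: a record without an explicit, machine-readable licence fails Reusability no matter how findable it is.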
From Free Culture to Open Data: Technical Requirements for Access and Authorship
2010
Abstract. Creative Commons tools make it easier for users, who are also authors, to share, locate and distribute reusable content, fostering remix and digital creativity, open science and freedom of expression. But reuse could be made even easier, because the licensing framework does not yet handle the diversity of legal and usage situations pertaining to the technical accessibility and reuse modalities of works and data.
MetaDL: A digital library of metadata for sensitive or complex research data
2002
Traditional digital library systems have difficulties when managing heterogeneous datasets that have limitations on their distribution. Collections of digital libraries have to be accessed individually and through non-uniform interfaces. By introducing a level of abstraction, a Meta-Digital Library or MetaDL, users gain a central access portal that allows for prioritized queries, evaluation and rating of the results, and secure transactions to obtain primary data. This paper demonstrates the MetaDL architecture with an application from human brain neuroimaging research: BrassDL, the Brain Support Access System Digital Library. This is the first such system that covers all aspects of a digital library for sensitive and complex human brain data, from secure acquisition and access, through user-to-user system-supported transactions, to legal, ethical and sustainability issues.
Metadata for Research Data: Current Practices and Trends
International Conference on Dublin Core and Metadata Applications, 2014
This paper reports a study that examined the metadata standards and formats used by a select number of research data services, namely DataCite, the Dataverse Network, Dryad, and FigShare. These services make use of a broad range of metadata practices and elements. The specific objective of the study was to investigate the number and nature of metadata elements; metadata elements specific to research data; compliance with interoperability and preservation standards; the use of controlled vocabularies for subject description and access; and the extent of support for unique identifiers, as well as the common and differing metadata elements across these services. The study found that a variety of metadata elements were used by the research data services and that the use of controlled vocabularies was common across them. It was found that preservation and unique identifiers are central components of the studied services. An interesting observation was the extent of research-data-specific metadata elements, with Dryad making use of a wider range of such elements than the other services.
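The common elements the study identifies across services are what make metadata crosswalks feasible. A minimal sketch mapping a few DataCite kernel properties onto Dublin Core terms; the correspondences shown are the obvious ones (title, creator, identifier, and so on), and this is an illustrative mapping, not the official DataCite-to-Dublin-Core crosswalk.

```python
# Illustrative crosswalk from DataCite kernel properties to Dublin Core
# terms. Only straightforward correspondences are included; fields with
# no mapping are simply dropped.
DATACITE_TO_DC = {
    "identifier": "dc:identifier",
    "creators": "dc:creator",
    "titles": "dc:title",
    "publisher": "dc:publisher",
    "publicationYear": "dc:date",
    "resourceType": "dc:type",
}

def to_dublin_core(datacite_record: dict) -> dict:
    """Translate mapped DataCite fields to Dublin Core; discard the rest."""
    return {DATACITE_TO_DC[k]: v
            for k, v in datacite_record.items() if k in DATACITE_TO_DC}

record = {
    "titles": ["Example dataset"],   # invented for illustration
    "publisher": "Example repo",
    "fundingReferences": [],         # no simple DC equivalent; dropped
}
print(to_dublin_core(record))
```

The dropped field illustrates the study's point: the research-data-specific elements (funding, methods, related datasets) are exactly the ones that do not survive a lossy crosswalk to a generic scheme.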