GeneSigDB—a curated database of gene expression signatures

2010

The primary objective of most gene expression studies is the identification of one or more gene signatures: lists of genes whose transcriptional levels are uniquely associated with a specific biological phenotype. Whilst thousands of experimentally derived gene signatures are published, their potential value to the community is limited by their computational inaccessibility. Gene signatures are embedded in published article figures, tables or supplementary materials, and are frequently presented using non-standard gene or probeset nomenclature. We present GeneSigDB (http://compbio.dfci.harvard.edu/genesigdb), a manually curated database of gene expression signatures. GeneSigDB release 1.0 focuses on cancer and stem cell gene signatures and was constructed from more than 850 publications, from which we manually transcribed 575 gene signatures. Most gene signatures (n = 560) were successfully mapped to the genome to extract standardized lists of EnsEMBL gene identifiers. GeneSigDB provides the original gene signature, the standardized gene list and a fully traceable gene mapping history for each gene, from the original transcribed data table through to the standardized list of genes. The GeneSigDB web portal is easy to search, allows users to compare their own gene lists to those in the database, and supports downloading gene signatures in the most common gene identifier formats.
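The traceable mapping the abstract describes, from a transcribed identifier to a standardized EnsEMBL gene ID with a recorded history, can be pictured as a small record structure. The sketch below is illustrative only: the lookup table and function names are assumptions, not GeneSigDB's actual pipeline.

```typescript
// Minimal sketch of a traceable gene-mapping record, assuming a
// hypothetical symbol-to-EnsEMBL lookup table (not GeneSigDB's real pipeline).
interface GeneMapping {
  original: string;         // identifier as transcribed from the publication
  ensemblId: string | null; // standardized EnsEMBL gene ID, if resolved
  history: string[];        // each step applied while resolving the identifier
}

const lookup: Record<string, string> = {
  TP53: "ENSG00000141510",  // example entries; a real table would be far larger
  BRCA1: "ENSG00000012048",
};

function mapGene(original: string): GeneMapping {
  const history = [`transcribed: ${original}`];
  const symbol = original.trim().toUpperCase();
  history.push(`normalized: ${symbol}`);
  const ensemblId = lookup[symbol] ?? null;
  history.push(ensemblId ? `mapped to ${ensemblId}` : "unmapped");
  return { original, ensemblId, history };
}

console.log(mapGene(" tp53 "));
```

Keeping the per-step history alongside the result is what makes the mapping fully traceable back to the transcribed table.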

Automating and Simplifying Multiparty Workflows

2018

Any broadcast organization that remains static runs the risk of being overtaken by newer, more agile alternatives. To remain competitive, broadcasters must constantly work to increase process velocity, accuracy, and flexibility. These goals cannot be reached without reducing time to market, manual touch-points, and associated labor costs. A major hurdle on this road to efficiency is the absence of a universal method to identify content, resulting in unnecessary manual workflows and time- and resource-consuming communications with third parties for the production, processing, and exchange of content. Root causes for these impracticalities include problems with work identification during the acquisition, reconciliation, and de-duplication of assets obtained from multiple sources, which place high demands on limited resources and cause delays or reduce content capacity. A necessary element to solve this problem is the use of globally unique and persistent works identification. As such, it w...

IDconverter and IDClight: conversion and annotation of gene and protein IDs

BMC Bioinformatics, 2007

Background: Researchers involved in the annotation of large numbers of gene, clone or protein identifiers are usually required to perform a one-by-one conversion for each identifier. In a field such as microarray experiments, this number may be around 30,000.
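At that scale, one-by-one conversion is impractical, and a batch lookup over a preloaded mapping table is the natural alternative. A minimal sketch, assuming a hypothetical in-memory table rather than IDconverter's actual database backend:

```typescript
// Batch identifier conversion over a preloaded mapping table
// (hypothetical rows; IDconverter's real backend is a database).
const cloneToGene = new Map<string, string[]>([
  ["IMAGE:12345", ["ENSG00000141510"]],
  ["IMAGE:67890", ["ENSG00000012048", "ENSG00000139618"]], // one-to-many
]);

function convertAll(ids: string[]): Map<string, string[]> {
  const result = new Map<string, string[]>();
  for (const id of ids) {
    result.set(id, cloneToGene.get(id) ?? []); // empty list = no match found
  }
  return result;
}

console.log(convertAll(["IMAGE:12345", "IMAGE:99999"]));
```

Returning a list per input identifier keeps one-to-many matches and misses explicit instead of silently dropping them.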

Requirements on unique identifiers for managing product lifecycle information: comparison of alternative approaches

International Journal of …, 2007

Managing product information for product items during their whole lifetime is challenging, especially during their usage and end-of-life phases. The main difficulty is to maintain a communication link between the product item and its associated information as the product item moves over organizational borders and between different users. As network access will typically not be continuous during the whole product-item lifecycle, it is necessary to embed at least a globally unique product identifier (GUPI) that makes it possible to identify the product item anytime during its lifecycle. A GUPI also has to provide a linking mechanism to product information that may be stored in backend systems of different organizations. GUPIs are thereby a cornerstone for enabling the Internet of Things, where 'intelligent products' can communicate over the Internet. In this paper, we analyze and compare the three main currently known approaches for achieving such functionality, i.e. the EPC Network, DIALOG and WWAI.
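The two requirements named here, global uniqueness and a linking mechanism to backend product information, can be sketched as a minimal type. The fields and resolver URL below are illustrative assumptions, not the EPC Network, DIALOG, or WWAI formats the paper compares.

```typescript
// Illustrative GUPI sketch: a globally unique identifier plus a resolvable
// link to backend product information (not the EPC/DIALOG/WWAI formats).
interface Gupi {
  issuer: string; // namespace of the issuing organization
  serial: string; // unique within the issuer's namespace
}

// Hypothetical resolver: maps a GUPI to the backend URL holding its data.
function resolve(id: Gupi): string {
  return `https://resolver.example.org/${encodeURIComponent(id.issuer)}/${encodeURIComponent(id.serial)}`;
}

console.log(resolve({ issuer: "acme.example", serial: "PUMP-000042" }));
```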

Userscripts for the Life Sciences

BMC Bioinformatics, 2007

The web has seen an explosion of chemistry and biology related resources in the last 15 years: thousands of scientific journals, databases, wikis, blogs and resources are available with a wide variety of types of information. There is a huge need to aggregate and organise this information. However, the sheer number of resources makes it unrealistic to link them all in a centralised manner. Instead, search engines to find information in those resources flourish, and formal languages like Resource Description Framework and Web Ontology Language are increasingly used to allow linking of resources. A recent development is the use of userscripts to change the appearance of web pages, by on-the-fly modification of the web content. This opens possibilities to aggregate information and computational results from different web resources into the web page of one of those resources.
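A userscript is ordinary JavaScript that a browser extension such as Greasemonkey runs after a page loads. The sketch below (written in TypeScript, which would be compiled to JavaScript before installation) shows the basic on-the-fly modification the abstract describes; the identifier pattern and target URL are illustrative assumptions, not any specific published userscript.

```typescript
// Sketch of a userscript: after page load, wrap every occurrence of a
// hypothetical identifier pattern in a link to an external resource.
// Pattern and URL are illustrative only.
const ID_PATTERN = /\bENSG\d{11}\b/g;

function linkify(root: HTMLElement): void {
  // Crude for a sketch: rewriting innerHTML re-parses the page subtree.
  root.innerHTML = root.innerHTML.replace(
    ID_PATTERN,
    (id) => `<a href="https://www.ensembl.org/id/${id}">${id}</a>`,
  );
}

// Userscript managers invoke the script once the document is available.
linkify(document.body);
```

This is the aggregation idea in miniature: content from one resource (an identifier on the page) becomes a live entry point into another resource, without either site changing its server.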

ASTROLABE: A Rigorous, Geodetic-Oriented Data Model for Trajectory Determination Systems

ISPRS International Journal of Geo-Information, 2017

The constant influx of new sensors is a challenge for software systems that do not rely on generic data models able to accommodate change or innovation. Several data modeling standards exist. Some of these address the problem from a generic perspective but are far too complex for the kind of applications targeted by this work, while others focus strictly on specific kinds of sensors. These approaches pose a problem for the maintainability of software systems dealing with sensor data. This work presents ASTROLABE, a generic and extensible data model specifically devised for trajectory determination systems working with sensors whose error distributions may be fully modeled using means and covariance matrices. A data model relying on four fundamental entities (observation, state, instrument, mathematical model) and related metadata is described; two compliant specifications (for file storage and network communications) are presented; and a portable C++ library implementing these specifications is briefly introduced. ASTROLABE, integrated into CTTC's trajectory determination system NAVEGA, has been used extensively since 2009 in research and production (real-life) projects, coping successfully with a significant variety of sensors. This experience helped to improve the data model and validate its suitability for the target problem. The authors are considering putting ASTROLABE in the public domain.
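The four fundamental entities can be pictured as plain types whose measurements are fully described by a mean vector and a covariance matrix, as the abstract requires. The field names below are assumptions for illustration, not ASTROLABE's actual file or network specification (which the paper defines, and whose reference implementation is in C++).

```typescript
// Illustrative sketch of the four ASTROLABE entities; field names are
// assumptions, not the actual specification.
type Matrix = number[][];

interface Observation { time: number; instrumentId: string; mean: number[]; covariance: Matrix; }
interface State       { time: number; mean: number[]; covariance: Matrix; }
interface Instrument  { id: string; kind: string; parameters: number[]; }
interface MathModel   { id: string; relates: { observations: string[]; states: string[] }; }

const gpsFix: Observation = {
  time: 0.0,
  instrumentId: "gps-1",
  mean: [4789123.4, 176945.2, 4194756.8],        // e.g. a position, in meters
  covariance: [[4, 0, 0], [0, 4, 0], [0, 0, 9]], // error fully modeled by mean + covariance
};
console.log(gpsFix.mean.length, "observation dimensions");
```

Because every sensor reduces to the same mean-plus-covariance shape, a new instrument type only adds metadata; the estimation software's interfaces stay unchanged.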

Quantifying the interoperability of Open Government datasets

Open Governments use the Web as a global dataspace for datasets. It is in the interest of these governments to be interoperable with other governments worldwide, yet there is currently no way to identify relevant datasets to be interoperable with, nor any way to measure the interoperability itself. In this article we discuss the possibility of comparing the identifiers used within various datasets as a way to measure semantic interoperability. We introduce three metrics to express the interoperability between two datasets: the identifier interoperability, the relevance, and the number of conflicts. The metrics are calculated from a list of statements that indicate, for each pair of identifiers in the system, whether they identify the same concept or not. While a lot of effort is needed to collect these statements, the return is high: not only are relevant datasets identified, but machine-readable feedback is also provided to the data maintainer.
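The inputs are simple: pairwise statements asserting whether two identifiers denote the same concept. The sketch below shows one plausible reading of the identifier-interoperability metric, as a fraction of matched identifiers; the paper's exact formulas are not reproduced here, and the statement data is invented.

```typescript
// Hedged sketch of an identifier-interoperability computation over
// same-concept statements (one plausible reading, not the paper's formula).
interface Statement { a: string; b: string; same: boolean; }

function identifierInteroperability(idsA: string[], statements: Statement[]): number {
  // Fraction of dataset A's identifiers with at least one "same concept"
  // counterpart according to the collected statements.
  const matched = new Set(
    statements.filter((s) => s.same).flatMap((s) => [s.a, s.b]),
  );
  const hits = idsA.filter((id) => matched.has(id)).length;
  return idsA.length === 0 ? 0 : hits / idsA.length;
}

const stmts: Statement[] = [
  { a: "A:city/9000", b: "B:gent", same: true },
  { a: "A:city/9050", b: "B:ledeberg", same: false },
];
console.log(identifierInteroperability(["A:city/9000", "A:city/9050"], stmts)); // 0.5
```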

AbMiner: A bioinformatic resource on available monoclonal antibodies and corresponding gene identifiers for genomic, proteomic, and immunologic studies

BMC Bioinformatics, 2006

Background: Monoclonal antibodies are used extensively throughout the biomedical sciences for detection of antigens, either in vitro or in vivo. We, for example, have used them for quantitation of proteins on "reverse-phase" protein lysate arrays. For those studies, we quality-controlled > 600 available monoclonal antibodies and also needed to develop precise information on the genes that encode their antigens. Translation among the various protein and gene identifier types proved nontrivial because of one-to-many and many-to-one relationships. To organize the antibody, protein, and gene information, we initially developed a relational database in FileMaker for our own use. When it became apparent that the information would be useful to many other researchers faced with the need to choose or characterize antibodies, we developed it further as AbMiner, a fully relational web-based database under MySQL, programmed in Java.
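The one-to-many and many-to-one relationships that make this translation nontrivial can be made concrete with a small sketch. The rows and identifiers below are invented for illustration and are not AbMiner's actual records or schema.

```typescript
// One-to-many and many-to-one antibody-to-gene relations, with invented
// example rows (not AbMiner's actual records or schema).
interface AntibodyRow { antibody: string; geneIds: string[]; }

const rows: AntibodyRow[] = [
  { antibody: "mAb-001", geneIds: ["ENSG00000141510"] },
  { antibody: "mAb-002", geneIds: ["ENSG00000111640", "ENSG00000226979"] }, // one-to-many
  { antibody: "mAb-003", geneIds: ["ENSG00000141510"] },                    // many-to-one with mAb-001
];

// Invert the relation: which antibodies detect a given gene's product?
function antibodiesByGene(table: AntibodyRow[]): Map<string, string[]> {
  const index = new Map<string, string[]>();
  for (const { antibody, geneIds } of table) {
    for (const g of geneIds) {
      const list = index.get(g) ?? [];
      list.push(antibody);
      index.set(g, list);
    }
  }
  return index;
}

console.log(antibodiesByGene(rows).get("ENSG00000141510")); // ["mAb-001", "mAb-003"]
```

Keeping both directions queryable is exactly what a relational schema buys over a flat antibody list.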