BioMedScape: a Web based Environment for Diffusion and Analysis of Research Data in Discrete Areas of Life-Sciences

Biomedical Web, Collections and Meta-Analysis Literature Applications

Biomedical Engineering - From Theory to Applications, 2011

Both a cause and an effect of the digital revolution is the production of many different kinds of web tools, applications and resources that optimize the retrieval, management and analysis of the biomedical bibliography. The information revolution is likewise a cause and an effect of the scientific and technological progress of the twentieth century. The amount of information now produced on different scientific topics is enormous and heterogeneous: it can be electronic or printed; it includes text, images and sound; it is organized in databases, catalogs or lists; access to it may be free or restricted; it concerns life and its parts, phenomena and their explanations; and it covers publications, researchers, projects, research groups and lines, agreements, grants, research and teaching institutions, biological collections, educational institutions and scientific societies, to name a few. To speak of information in the twenty-first century is to invoke novel and innovative terms, methods and theories such as the knowledge society, the information society, globalization, info-diversity, access to information, e-science, e-research, grids, collaboratories, repositories, literature-based knowledge, text mining, the semantic web, impact indices, co-citation, web 2.0 and 3.0, social networking, plagiarism, and free access. These changes have dramatically affected the contemporary world view, scientific practice, and scientific, social, economic, political and cultural relations (Russell, 2001).

Scientific society generates and receives information; it is exposed to it as a representation of thought and knowledge, and in all cases it creates a conscious or unconscious interest that is transmitted individually or collectively. Scientific communities recognize the value of information and require it as a precondition for fundamental research. Published information on biology and medicine is no exception: the quantity, diversity and complexity of digital information are so great, and some of the electronic resources through which it can be accessed so far from simple, that it has become necessary to stay informed and up to date on the continual emergence and modification of these tools, while the sheer number of articles continuously published in journals has itself become a problem to solve. Researchers must therefore devise strategies for retrieving and analyzing information in their specific areas of interest, and programs and websites are constantly being designed to achieve this (Larson, 2010a). Electronic resources containing the biomedical literature can be consulted online: the Internet allows instant access to digital data collections that are continually updated with information generated by specialists (Faciola, 2009). The power of the new electronic technologies has increased exponentially, and many applications have been designed to group, sort and display documents, reducing the effort, cost and time required to analyze the specialized literature (Hey & Trefethen, 2005). Moreover, in less than ten years the practice of science itself has changed: reality is no longer explored only through in vivo and/or in vitro experiments and models, but also in silico, with computational tools and methods (Atkins et al., 2003). This phenomenon has affected both the way scientific knowledge is produced and the emergence of new fields practiced by specialists, such as bioinformatics, medical informatics, biological informatics, neuroinformatics, and literature-based discovery, among others.
The change has been significant even in the way the literature is retrieved and analyzed, so much so that new ways of accessing information have been proposed that set aside the reductionist approach and adopt a systems view in keeping with the progress of the biological disciplines themselves. Searching, accessing, analyzing and updating the literature in databases has become a daily task, and it is usually necessary to consult several indexes to obtain a more complete representation of the literature on a topic of interest (Zhou et al., 2006). But such is the quantity and diversity of papers in biomedicine, and so numerous, varied and complex are the electronic resources (especially bibliographic databases) through which that information can be accessed, and so constantly do they change, progress and update, that it is difficult to keep track of them all and to identify which and how many can and should be used.
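As an illustration of the kind of programmatic retrieval such strategies rely on, the minimal sketch below queries a single bibliographic index, PubMed, through NCBI's public E-utilities interface. The search term and result handling are illustrative assumptions; a real retrieval strategy would typically combine several indexes and merge their results.

```python
# Minimal sketch: programmatic retrieval of biomedical citations from one index
# (NCBI PubMed via the public E-utilities API). The query term below is only
# an example; a real tool would consult several indexes and merge results.
import json
import urllib.parse
import urllib.request

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def pubmed_search(term, max_results=20):
    """Return a list of PubMed IDs matching a free-text query."""
    params = urllib.parse.urlencode({
        "db": "pubmed",
        "term": term,
        "retmax": max_results,
        "retmode": "json",
    })
    with urllib.request.urlopen(f"{EUTILS}/esearch.fcgi?{params}") as resp:
        result = json.load(resp)
    return result["esearchresult"]["idlist"]

if __name__ == "__main__":
    # Example query on a neuroscience topic (illustrative only).
    for pmid in pubmed_search("paraventricular nucleus AND stress"):
        print(pmid)
```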

NeuroScholar's Electronic Laboratory Notebook and Its Application to Neuroendocrinology

Neuroinformatics, 2006

Scientists continually relate information from the published literature to their current research. The challenge of this essential and time-consuming activity increases as the body of scientific literature continues to grow. In an attempt to lessen the challenge, we have developed an Electronic Laboratory Notebook (ELN) application. Our ELN functions as a component of another application we have developed, an open-source knowledge management system for the neuroscientific literature called NeuroScholar (http://www.neuroscholar.org/). Scanned notebook pages, images, and data files are entered into the ELN, where they can be annotated, organized, and linked to similarly annotated excerpts from the published literature within NeuroScholar. Associations between these knowledge constructs are created within a dynamic node-and-edge user interface to produce an interactive, adaptable knowledge base. We demonstrate the ELN's utility by using it to organize data and literature related to our studies of the neuroendocrine hypothalamic paraventricular nucleus (PVH). We also discuss how the ELN could be applied to model other neuroendocrine systems; as an example, we look at the role of PVH stressor-responsive neurons in the context of their involvement in the suppression of reproductive function. We present this application to the community as open-source software and invite contributions to its development.
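As a rough illustration of the node-and-edge organization this abstract describes, the sketch below represents notebook items and literature excerpts as nodes and annotated associations as edges. The class and relation names are hypothetical and do not correspond to NeuroScholar's actual code or API.

```python
# Illustrative sketch of a node-and-edge knowledge base in the spirit of the
# ELN described above: notebook items and literature excerpts become nodes,
# and annotated associations become edges. All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Node:
    kind: str          # e.g. "notebook_page", "image", "literature_excerpt"
    label: str
    annotation: str = ""

@dataclass
class KnowledgeBase:
    nodes: list = field(default_factory=list)
    edges: list = field(default_factory=list)   # (source, relation, target)

    def add(self, node):
        self.nodes.append(node)
        return node

    def link(self, source, relation, target):
        self.edges.append((source, relation, target))

kb = KnowledgeBase()
scan = kb.add(Node("notebook_page", "Scan 2005-11-03", "PVH injection site"))
excerpt = kb.add(Node("literature_excerpt", "Published tract-tracing result",
                      "CRH neurons in the PVH"))
kb.link(scan, "supports", excerpt)
```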

NeuroHub: A Research Information Environment for Neuroscientists

Abstract—The NeuroHub project aims to develop a research information system for neuroscientists at three different partner institutions: Oxford, Reading and Southampton. Each research group has different working practices, research methodologies and user requirements, which have led to the development of a system that supports a wide variety of tasks in the neuroscience research life cycle.

Information Infrastructure for Cooperative Research in Neuroscience

Computational Intelligence and Neuroscience, 2009

The paper describes a framework for the efficient sharing of knowledge between research groups that have been working together for several years, though not without flaws. The obstacles to cooperation are connected primarily with the lack of platforms for the effective exchange of experimental data, models, and algorithms. A solution to these problems is proposed through the construction of a platform (EEG.pl) with a semantic-aware search scheme between portals. This approach, implemented in international cooperative projects such as NEUROMATH, may bring significant progress in designing efficient methods for neuroscience research.

The Internet for the medical and scientific community

Molecular human reproduction, 1998

With the explosion of data coming out of the international scientific community, researchers can now locate and manipulate data of interest quickly and easily on the Internet. The scope of this review is to focus on some of the recent developments of the Internet that are especially relevant to health scientists. The review also provides the medical and scientific community with a selection of sites to visit on the Internet, dealing with clinical and laboratory aspects of science.

Development and application of CMS-based database modules for neuroinformatics

Neurocomputing, 2007

In order to utilize the accumulation of expertise and research effectively, it is important to integrate various resources, such as bibliographies and experimental data, from individual laboratories up to the international level. The sharing of research resources and the integration of knowledge are absolutely imperative for future development in both experimental and computational neuroscience. At the same time, content management systems (CMS) have become widespread for constructing and managing WWW portal sites. In this study, in order to construct an effective resource-managing environment in the laboratory, we develop two CMS-based database modules that provide data integration and sharing capabilities for bibliographical resources and archived data files. We show the effectiveness of these modules in the biological and neuroscience fields by applying them in our laboratory-based work. Furthermore, because security considerations have been taken into account, similar applications can be made in the construction of Internet portal sites.
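A minimal sketch of the two kinds of records such CMS-based modules manage, bibliographical resources and archived data files, is given below; the field names and the tag-based cross-referencing are assumptions made for illustration, not the modules' actual schema.

```python
# Hedged sketch of the two record types such modules might manage:
# bibliographical entries and archived data files, with a shared tag field
# so the two can be cross-referenced. Field names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class BibliographyEntry:
    title: str
    authors: list
    year: int
    tags: list = field(default_factory=list)

@dataclass
class ArchivedDataFile:
    filename: str
    owner: str
    description: str
    tags: list = field(default_factory=list)

def related_files(entry, archive):
    """Files whose tags overlap with a bibliography entry's tags."""
    return [f for f in archive if set(f.tags) & set(entry.tags)]

paper = BibliographyEntry("Example recording study", ["A. Author"], 2006,
                          tags=["patch-clamp", "hippocampus"])
archive = [ArchivedDataFile("trace_01.dat", "lab-member", "raw traces",
                            tags=["patch-clamp"])]
print([f.filename for f in related_files(paper, archive)])
```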

A Digital Repository and Execution Platform for Interactive Scholarly Publications in Neuroscience

Neuroinformatics, 2015

The CARMEN Virtual Laboratory (VL) is a cloud-based platform which allows neuroscientists to store, share, develop, execute, reproduce and publicise their work. This paper describes new functionality in the CARMEN VL: an interactive publications repository. This new facility allows users to link data and software to publications. This enables other users to examine data and software associated with the publication and execute the associated software within the VL using the same data as the authors used in the publication. The cloud-based architecture and SaaS (Software as a Service) framework allows vast data sets to be uploaded and analysed using software services. Thus, this new interactive publications facility allows others to build on research results through reuse. This aligns with recent developments by funding agencies, institutions, and publishers in the move to open access research. Open access provides reproducibility and verification of research resources and results. Publications and their associated data and software will be assured of long-term preservation and curation in the repository. Further, analysing research data and the evaluations described in publications frequently requires a number of execution stages, many of which are iterative. The VL provides a scientific workflow environment to combine software services into a processing tree. These workflows can also be associated with publications and executed by users. The VL also provides a secure environment where users can decide the access rights for each resource to ensure copyright and privacy restrictions are met.
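The sketch below illustrates the processing-tree idea described above: software services are composed as nodes whose outputs feed their parent, so a workflow can be re-executed against the same data a publication used. The service names and toy operations are invented examples, not CARMEN services.

```python
# Minimal sketch of a "processing tree" workflow: each node wraps a service
# (here a plain Python callable) and feeds its children's outputs upward.
class ServiceNode:
    def __init__(self, name, func, children=()):
        self.name = name
        self.func = func            # callable applied to the children's outputs
        self.children = list(children)

    def execute(self, data):
        if not self.children:
            return self.func(data)
        inputs = [child.execute(data) for child in self.children]
        return self.func(*inputs)

# Example tree (invented services): band-pass filter -> spike detection -> summary.
filter_step = ServiceNode("bandpass", lambda d: [x for x in d if abs(x) < 100])
detect_step = ServiceNode("detect_spikes",
                          lambda filtered: [x for x in filtered if x > 50],
                          children=[filter_step])
summary = ServiceNode("summary", lambda spikes: {"n_spikes": len(spikes)},
                      children=[detect_step])

print(summary.execute([12, 55, 230, 61, -40]))   # -> {'n_spikes': 2}
```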

The neuroscience information framework: a data and knowledge environment for neuroscience

Neuroinformatics, 2008

With support from the Institutes and Centers forming the NIH Blueprint for Neuroscience Research, we have designed and implemented a new initiative for integrating access to and use of Web-based neuroscience resources: the Neuroscience Information Framework. The Framework arises from the expressed need of the neuroscience community for neuroinformatic tools and resources to aid scientific inquiry, builds upon prior development of neuroinformatics by the Human Brain Project and others, and directly derives from the Society for Neuroscience's Neuroscience Database Gateway. Partnered with the Society, its Neuroinformatics Committee, and volunteer consultant-collaborators, our multi-site consortium has developed: (1) a comprehensive, dynamic inventory of Web-accessible neuroscience resources, (2) an extended and integrated terminology describing resources and contents, and (3) a framework accepting and aiding concept-based queries. Evolving instantiations of the Framework may be viewed at http://nif.nih.gov, http://neurogateway.org, and other sites as they come on line.
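A minimal sketch of concept-based querying in this spirit is shown below: a terminology maps synonyms to a canonical concept, and resources tagged with that concept are returned regardless of which synonym the user typed. The terminology entries and inventory items are invented examples, not NIF content.

```python
# Hedged sketch of concept-based querying over a resource inventory.
# Both the terminology and the inventory entries are invented examples.
TERMINOLOGY = {
    "striatum": "striatum",
    "caudate-putamen": "striatum",
    "neostriatum": "striatum",
}

INVENTORY = [
    {"name": "Example brain atlas", "concepts": {"striatum", "cortex"}},
    {"name": "Example gene-expression database", "concepts": {"hippocampus"}},
]

def concept_query(term):
    """Map a user term to its canonical concept, then match resources."""
    concept = TERMINOLOGY.get(term.lower(), term.lower())
    return [r["name"] for r in INVENTORY if concept in r["concepts"]]

print(concept_query("Caudate-putamen"))   # -> ['Example brain atlas']
```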

Role of Internet Images in the Biomedical Informatics Research Network

Internet Imaging IV, 2003

The Biomedical Informatics Research Network (BIRN) is a wide-ranging project sponsored by the American National Institutes of Health (NIH) to promote the use of modern telecommunications for data exchange and collaboration in brain research. The project is attempting to build a database and network infrastructure in which neuroscientists will post, query, and analyze raw data, processed data, and the results of analyses. The project is divided into two parts, which analyze mouse brain data and human brain data, respectively. In this phase of the project the data are essentially anatomical, while in a future phase we foresee the introduction of functional data. One important source of raw data, for both the mouse and the human brain, is magnetic resonance imaging (MRI), which provides dense volumetric information on the density of the brain or (in the case of functional MRI) on brain activity. In the case of the mouse brain, these data are supplemented with images of brain slices and other histological measures. One important technical problem that we are facing in BIRN is that of managing these volumetric data, processing them (possibly using tools available only remotely), storing the results of the analyses, and making them available to all the institutions participating in the project. This paper describes the problems posed by the BIRN project, the importance of image data in these activities, and the challenges they pose. We describe the shared environment that we are creating, and the facilities for storing, querying, remotely processing, and sharing the image data that constitute the bulk of the brain data that scientists are producing.
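As a rough illustration of the kind of volumetric record such an infrastructure must manage, the sketch below pairs a dense 3-D intensity array with provenance metadata and a place to attach analysis results. The field names and the toy processing step are assumptions made for illustration, not BIRN's actual data model.

```python
# Illustrative sketch of a volumetric brain record: a 3-D intensity array
# plus provenance metadata, so raw data, processed data, and analysis
# results can be posted and queried together. All fields are hypothetical.
import numpy as np

class BrainVolume:
    def __init__(self, data, subject_id, modality, voxel_size_mm):
        self.data = np.asarray(data)          # shape (x, y, z) intensity volume
        self.subject_id = subject_id
        self.modality = modality              # e.g. "MRI", "fMRI", "histology"
        self.voxel_size_mm = voxel_size_mm
        self.derived = {}                     # results of (possibly remote) analyses

    def add_result(self, name, value):
        self.derived[name] = value

vol = BrainVolume(np.zeros((128, 128, 64)), subject_id="mouse-042",
                  modality="MRI", voxel_size_mm=(0.1, 0.1, 0.1))
vol.add_result("mean_intensity", float(vol.data.mean()))
print(vol.derived)
```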

Web-based Multi-center Data Management System for Clinical Neuroscience Research

Journal of Medical Systems, 2010

Modern clinical research often involves multicenter studies, large and heterogeneous data flux, and intensive demands for collaboration, security and quality assurance. In the absence of commercial or academic management systems, we designed an open-source system to meet these requirements. Based on the Apache-PHP-MySQL platform on a Linux server, the system allows multiple users to access the database from any location on the internet using a web browser, and requires no specialized computer skills. A multi-level security system is implemented to safeguard protected health information and to allow partial or full access to the data by individual or class privilege. The system stores and manipulates various types of data, including images, scanned documents, laboratory data and clinical ratings. Built-in functionality allows for various search, quality control and analytic data operations, as well as visit scheduling and visit reminders. This approach offers a solution to a growing need for the management of large multi-center clinical studies.
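A minimal sketch of the multi-level access idea is shown below: each user holds a privilege level, and a record is returned in full, in part, or not at all depending on that level. The level names and the fields treated as protected health information are illustrative assumptions, not the system's actual policy.

```python
# Hedged sketch of privilege-based filtering of a clinical record.
# Level names and the protected fields are assumptions for illustration.
PROTECTED_FIELDS = {"patient_name", "date_of_birth"}

def view_record(record, privilege):
    """Return a copy of a clinical record filtered by privilege level."""
    if privilege == "full":
        return dict(record)
    if privilege == "partial":
        return {k: v for k, v in record.items() if k not in PROTECTED_FIELDS}
    return {}   # no access

record = {"patient_name": "example only", "visit_date": "2010-03-14",
          "clinical_rating": 7}
print(view_record(record, "partial"))   # protected fields removed
```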