Tai Chi improves balance and prevents falls in people with Parkinson's disease

Variable Outcome of Acute Viral Hepatitis in Diabetic and Nondiabetic Patients in Bangladesh

Euroasian Journal of Hepato-Gastroenterology

Diabetes mellitus (DM) is common in Bangladesh, and the country is also well known for frequent outbreaks of acute viral hepatitis (AVH). The study presented here was designed to compare the clinical course of AVH in patients with and without DM. A total of 300 patients with AVH were enrolled into two groups: group A, patients with AVH and DM (N = 140), and group B, patients with AVH without DM (N = 160). There was no significant difference in age, sex, or alanine aminotransferase (ALT) levels between the two groups. The main cause of AVH was hepatitis E virus (HEV), in 100 and 112 patients of groups A and B, respectively. Jaundice persisted for more than 6 months in 68 of 140 (49%) patients of group A, whereas this was found in only 11 of 160 (7%) patients of group B. Forty-two patients of group A showed evidence of esophageal varices, whereas endoscopic assessment did not reveal any abnormality in patients of group B. Moderate to severe hepatic fibrosis was seen in 19 of 140 patients of group A but was not detected in any patient of group B (AVH without DM). Even more important is the fact that four patients of group A died of liver failure, whereas there was no mortality in group B. These findings indicate that all patients with DM and superimposed AVH should be followed up carefully for the possibility of developing severe liver disease and even death.
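The gap between the two groups can be checked from the counts reported in the abstract with a standard two-proportion z-test. The sketch below is a reader's back-of-the-envelope calculation, not part of the study's own analysis:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for a difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                     # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))    # 2 * (1 - Phi(|z|))
    return p1, p2, z, p_value

# Jaundice persisting > 6 months: 68/140 (group A) vs 11/160 (group B)
p1, p2, z, p_value = two_proportion_z(68, 140, 11, 160)
```

With these counts the z-statistic is large and the p-value far below 0.001, consistent with the abstract's contrast between the groups.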

Integration of Spatial Point Features with Linear Referencing Methods

Transportation Research Record, 2003

Many data-storage and reporting methods at state departments of transportation and other transportation agencies rely on linear referencing methods (LRMs) for managing transportation data. Consequently, GPS data must be able to coexist with linear referencing systems (LRSs). Unfortunately, the two systems are fundamentally different in the way they collect, integrate, and manipulate data. For spatial data collected with GPS to be integrated into an LRS or shared among LRMs, several issues must be addressed. These issues are discussed for integrating point features from a typical inventory system with an LRM or between LRMs, including converting two- or three-dimensional GPS data to a one-dimensional LRM, linear offset error, lateral offset error, matching to the wrong segment, and locating points between LRMs.
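The core 2-D-to-1-D conversion problem can be illustrated with a small sketch: project a GPS point onto a route polyline, yielding a linear measure (the 1-D LRM location) and a lateral offset (the residual perpendicular distance). The route and point below are invented for the example; real LRM conversion must also handle segment-matching ambiguity, which this sketch ignores:

```python
import math

def snap_to_route(route, pt):
    """Project a 2-D point onto a polyline.

    Returns (measure, lateral_offset):
      measure        -- distance along the route to the projected point
      lateral_offset -- perpendicular distance from the point to the route
    """
    best = (float("inf"), 0.0)   # (lateral_offset, measure)
    cum = 0.0                    # cumulative length of preceding segments
    for (x1, y1), (x2, y2) in zip(route, route[1:]):
        dx, dy = x2 - x1, y2 - y1
        seg_len = math.hypot(dx, dy)
        # projection parameter, clamped to stay on the segment
        t = max(0.0, min(1.0, ((pt[0] - x1) * dx + (pt[1] - y1) * dy) / (seg_len * seg_len)))
        px, py = x1 + t * dx, y1 + t * dy
        lateral = math.hypot(pt[0] - px, pt[1] - py)
        if lateral < best[0]:
            best = (lateral, cum + t * seg_len)
        cum += seg_len
    lateral, measure = best
    return measure, lateral

# Hypothetical route: 100 m east, then 100 m north
route = [(0.0, 0.0), (100.0, 0.0), (100.0, 100.0)]
measure, lateral = snap_to_route(route, (50.0, 3.0))
```

Here the GPS point 3 m off the first segment snaps to measure 50 along the route with a 3 m lateral offset; a point near the second segment accumulates the first segment's length into its measure.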

Deep learning meets ontologies: experiments to anchor the cardiovascular disease ontology in the biomedical literature

Journal of biomedical semantics, 2018

Automatic identification of term variants or acceptable alternative free-text terms for gene and protein names from the millions of biomedical publications is a challenging task. Ontologies, such as the Cardiovascular Disease Ontology (CVDO), capture domain knowledge in a computational form and can provide context for gene/protein names as written in the literature. This study investigates: 1) whether word embeddings from Deep Learning algorithms can provide a list of term variants for a given gene/protein of interest; and 2) whether biological knowledge from the CVDO can improve such a list without modifying the word embeddings created. We manually annotated 105 gene/protein names from 25 PubMed titles/abstracts and mapped them to 79 unique UniProtKB entries corresponding to gene and protein classes from the CVDO. Using more than 14 million PubMed articles (titles and available abstracts), word embeddings were generated with CBOW and Skip-gram. We set up two experiments for a synonym detection...
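The synonym-detection idea rests on the distributional hypothesis: terms used in similar contexts get similar vectors, so cosine similarity over embeddings surfaces term variants. The sketch below substitutes simple co-occurrence count vectors for the paper's CBOW/Skip-gram embeddings, on an invented mini-corpus, purely to illustrate the ranking step:

```python
import math
from collections import defaultdict

def cooccurrence_vectors(sentences, window=2):
    """Sparse context-count vectors: a crude stand-in for word embeddings."""
    vecs = defaultdict(lambda: defaultdict(int))
    for sent in sentences:
        toks = sent.lower().split()
        for i, w in enumerate(toks):
            for j in range(max(0, i - window), min(len(toks), i + window + 1)):
                if j != i:
                    vecs[w][toks[j]] += 1
    return vecs

def cosine(u, v):
    dot = sum(u[k] * v.get(k, 0) for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Invented mini-corpus; "p53" and "TP53" share contexts
corpus = [
    "p53 regulates apoptosis in tumor cells",
    "TP53 regulates apoptosis in cancer cells",
    "BRCA1 repairs damaged DNA in cells",
    "a kinase phosphorylates substrate proteins",
]
vecs = cooccurrence_vectors(corpus)
candidates = ["tp53", "brca1", "kinase"]
ranked = sorted(candidates, key=lambda w: cosine(vecs["p53"], vecs[w]), reverse=True)
```

Because "p53" and "TP53" occur in near-identical contexts, "tp53" ranks first; the study's second question, whether CVDO knowledge can re-rank such a list, would act on `ranked` without touching the vectors.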

Using the words/leafs ratio in the DOM tree for content extraction

The Journal of Logic and Algebraic Programming, 2013

The main content in a webpage is usually centered and visible without the need to scroll. It is often surrounded by the navigation menus of the website and can include advertisements, panels, banners, and other, not necessarily related, information. The process of automatically extracting the main content of a webpage is called content extraction. Content extraction is a research area of wide interest due to its many applications. Concretely, it is useful not only for the final human user, but it is also frequently used as a preprocessing stage of different systems (e.g., robots, indexers, and crawlers) that need to extract the main content of a web document to avoid treating and processing useless information. In this work we present a new technique for content extraction that is based on the information contained in the DOM tree. The technique analyzes the hierarchical relations of the elements in the webpage and the distribution of textual information in order to identify the main block of content. Thanks to the hierarchy imposed by the DOM tree, the technique achieves considerable recall and precision. Using the DOM structure for content extraction gives us the benefits of other approaches based on the syntax of the webpage (such as characters, words, and tags), but it also gives very precise information regarding the related components in a block (not necessarily textual, such as images or videos), thus producing very cohesive blocks.
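The title's words/leafs ratio can be illustrated with a minimal sketch: build a DOM tree, then for each element divide the words in its subtree by the number of leaf nodes, and pick the maximum. Navigation menus score low (many leaves, few words each) while the main block scores high. This is a toy reconstruction of the general idea, not the paper's exact algorithm:

```python
from html.parser import HTMLParser

class Node:
    def __init__(self, tag, attrs):
        self.tag, self.attrs = tag, dict(attrs)
        self.children, self.words = [], 0

class DomBuilder(HTMLParser):
    """Build a minimal DOM tree (void/self-closing tags not handled)."""
    def __init__(self):
        super().__init__()
        self.root = Node("#root", [])
        self.stack = [self.root]
    def handle_starttag(self, tag, attrs):
        node = Node(tag, attrs)
        self.stack[-1].children.append(node)
        self.stack.append(node)
    def handle_endtag(self, tag):
        if len(self.stack) > 1:
            self.stack.pop()
    def handle_data(self, data):
        self.stack[-1].words += len(data.split())

def word_count(node):
    return node.words + sum(word_count(c) for c in node.children)

def leaf_count(node):
    return 1 if not node.children else sum(leaf_count(c) for c in node.children)

def main_block(root):
    """Return the element whose subtree maximizes the words/leaves ratio."""
    best, best_ratio = None, -1.0
    stack = list(root.children)
    while stack:
        n = stack.pop()
        ratio = word_count(n) / leaf_count(n)
        if ratio > best_ratio:
            best, best_ratio = n, ratio
        stack.extend(n.children)
    return best

page = """<html><body>
<ul><li>Home</li><li>News</li><li>About</li><li>Contact</li></ul>
<div id="content"><p>Content extraction isolates the main text of a page,
discarding menus banners and advertisements so that crawlers indexers and
readers can work with the relevant information only.</p></div>
</body></html>"""
parser = DomBuilder()
parser.feed(page)
best = main_block(parser.root)
```

On this page the menu's `ul` scores 1 word per leaf, while the content `div` concentrates all its words in a single leaf paragraph, so it is selected as the main block.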

High quality, small molecule-activity datasets for kinase research

F1000Research, 2016

Kinases regulate cell growth, movement, and death. Deregulated kinase activity is a frequent cause of disease. The therapeutic potential of kinase inhibitors has led to large amounts of published structure–activity relationship (SAR) data. Bioactivity databases such as the Kinase Knowledgebase (KKB), WOMBAT, GOSTAR, and ChEMBL provide researchers with quantitative data characterizing the activity of compounds across many biological assays. The KKB, for example, contains over 1.8M kinase structure–activity data points reported in peer-reviewed journals and patents. In the spirit of fostering methods development and validation worldwide, we have extracted from the KKB, and made available, 258K structure–activity data points and 76K associated unique chemical structures across eight kinase targets. These data are freely available for download within this data note.

An Energy Efficient Cluster-Head Formation and Medium Access Technique in Multi-Hop Wban

ICTACT Journal on Communication Technology

Identification and verification have always been at the heart of financial services and payments, even more so in the digital age. So, while banks have long been trusted to keep money safe, is there a new role for them as stewards of digital identity? Governments should, in consultation with the private sector, develop a national identity strategy based on a federated-style model in which public- and private-sector identity providers compete to supply trusted digital identities to individuals and businesses. When the world seemed smaller, slower, and more local, physical identity documents were adequate for face-to-face transactions. The Internet, however, changed everything: it shrank distances, created new business models, and sped everything up, from the innovation lifecycle to access to information, processes, and the clock-speed on risk. The use of the Internet for doing business has grown over the years in Africa, and in Zambia in particular; with it, the incidence of online identity theft has grown too, and identity theft is becoming a prevalent and increasing problem in Zambia. An identity thief requires only certain identity information to devastate a victim's life and credit. This research proposes to identify and extract the various forms of identity attributes, from sources used in both the physical world and cyberspace, that identify users accessing financial services, drawing on the various forms of identity credentials and application forms. Finally, it designs a digital identity model, based on Shannon's information theory and Euclidean Distance Geometry (EDG), for quantifying, implementing, and validating the extracted identity attributes in an effective way.
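The abstract names Euclidean Distance Geometry but does not spell out the model, so the sketch below is only a guess at the flavor of such a scheme: encode the agreement between a claimed identity and the record on file as a mismatch vector over hypothetical attributes, and use the Euclidean distance as a match score (0 means every attribute agrees). The field names and records are invented:

```python
import math

FIELDS = ["name", "national_id", "phone", "address"]  # hypothetical attributes

def mismatch_vector(rec_a, rec_b, fields=FIELDS):
    """1.0 where an attribute differs, 0.0 where it matches (a crude encoding)."""
    return [0.0 if rec_a.get(f) == rec_b.get(f) else 1.0 for f in fields]

def identity_distance(rec_a, rec_b, fields=FIELDS):
    """Euclidean distance between two identity records in mismatch space."""
    v = mismatch_vector(rec_a, rec_b, fields)
    return math.sqrt(sum(x * x for x in v))

claimed = {"name": "A. Banda", "national_id": "123456/78/9",
           "phone": "+260-97-000000", "address": "Lusaka"}
on_file = dict(claimed)                        # identical record on file
spoofed = {"name": "A. Banda", "national_id": "999999/99/9",
           "phone": "+260-96-111111", "address": "Lusaka"}

d_same = identity_distance(claimed, on_file)   # 0.0: perfect match
d_spoof = identity_distance(claimed, spoofed)  # two mismatched attributes
```

A real system would weight attributes by their information content (the Shannon-theory side the abstract mentions) rather than treating all fields equally as this toy does.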

Expressing History through a Geo-Spatial Ontology

ISPRS International Journal of Geo-Information

Conventional Geographical Information Systems (GIS) software struggles to represent uncertain and contested historical knowledge. An ontology, meaning a semantic structure defining named entities and explicit, typed relationships, can be constructed in the absence of locational data, and spatial objects can be attached to this structure if and when they become available. We describe the overall architecture of the Great Britain Historical GIS, and the PastPlace Administrative Unit Ontology that forms its core. Then, we show how particular historical geographies can be represented within this architecture through two case studies, both emphasizing entity definition and especially the application of a multi-level typology, in which each “unit” has an unchanging “type” but also a time-variant “status”. The first includes the linked systems of Poor Law unions and registration districts in 19th-century England and Wales, in which most but not all unions and districts were coterminous...
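The multi-level typology described above separates what never changes about a unit from what does. A minimal data-model sketch, using an invented unit and illustrative years (not drawn from the paper's case studies):

```python
from dataclasses import dataclass, field

@dataclass
class Status:
    label: str   # time-variant "status", e.g. "municipal borough"
    start: int   # first year the status applies
    end: int     # last year the status applies (inclusive)

@dataclass
class Unit:
    name: str
    unit_type: str                                # unchanging "type"
    statuses: list = field(default_factory=list)  # time-variant "status" history

    def status_at(self, year):
        """Return the status label in force in the given year, if any."""
        for s in self.statuses:
            if s.start <= year <= s.end:
                return s.label
        return None

# Fictional unit: the type is fixed for its whole existence, the status changes
exampleton = Unit("Exampleton", "district",
                  [Status("municipal borough", 1835, 1888),
                   Status("county borough", 1889, 1973)])
```

Spatial objects (boundary polygons for particular date ranges) could then be attached to such units if and when locational data becomes available, which is the point of building the ontology first.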

Bovine Genome Database: new tools for gleaning function from the Bos taurus genome

Nucleic Acids Research, 2015

We report an update of the Bovine Genome Database (BGD) (http://BovineGenome.org). The goal of BGD is to support bovine genomics research by providing genome annotation and data mining tools. We have developed new genome and annotation browsers using JBrowse and WebApollo for two Bos taurus genome assemblies, the reference genome assembly (UMD3.1.1) and the alternate genome assembly (Btau 4.6.1). Annotation tools have been customized to highlight priority genes for annotation and to aid annotators in selecting gene evidence tracks from 91 tissue-specific RNA-seq datasets. We have also developed BovineMine, based on the InterMine data warehousing system, to integrate the bovine genome, annotation, QTL, SNP, and expression data with external sources of orthology, gene ontology, gene interaction, and pathway information. BovineMine provides powerful query-building tools, as well as customized query templates, and allows users to analyze and download genome-wide datasets. With BovineMine, bovine researchers can use orthology to leverage the curated gene pathways of model organisms, such as human, mouse, and rat. BovineMine will be especially useful for gene ontology and pathway analyses in conjunction with GWAS and QTL studies.