Parsing Research Papers - Academia.edu
This study addresses the critical need for an accurate aspect-based sentiment analysis (ABSA) model to understand sentiments effectively. Existing ABSA models often face challenges in accurately extracting aspects and determining sentiment polarity from textual data. We therefore propose a novel approach leveraging latent Dirichlet allocation (LDA) for aspect extraction and transformer-based bidirectional encoder representations from transformers (TF-BERT) for sentiment-polarity evaluation. Experiments were carried out on the SemEval 2014 laptop and restaurant datasets, and a multi-domain dataset was generated by combining SemEval 2014, Amazon, and hospital reviews. The results demonstrate the superiority of the LDA-TF-BERT model, which achieves 82.19% accuracy and a 79.52% Macro-F1 score on the laptop task, and 86.26% accuracy and an 81.27% Macro-F1 score on the restaurant task. This showcases the model's robustness and effectiveness in accurately analyzing textual data and extracting meaningful insights. The novelty of our work lies in combining LDA and TF-BERT to provide a comprehensive and accurate ABSA solution for various industries, contributing significantly to the advancement of sentiment-analysis techniques.
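As a rough illustration of the two-stage recipe this abstract describes, the sketch below pairs LDA-based aspect discovery with an off-the-shelf BERT sentiment classifier. The reviews, topic count, and default pipeline model are illustrative stand-ins; the paper's TF-BERT configuration and its aspect-to-sentence matching are not reproduced here.

```python
# Minimal sketch: LDA groups review terms into aspect topics, then a
# pretrained BERT pipeline scores sentiment polarity. All data and model
# choices are illustrative, not the paper's LDA-TF-BERT setup.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from transformers import pipeline

reviews = [
    "The battery life is great but the keyboard feels cheap.",
    "Terrible battery, drains in an hour.",
    "Lovely screen and a comfortable keyboard.",
]

# Stage 1: aspect discovery with LDA over bag-of-words counts.
vec = CountVectorizer(stop_words="english")
counts = vec.fit_transform(reviews)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    print(f"aspect topic {k}:", [terms[i] for i in topic.argsort()[-3:]])

# Stage 2: sentiment polarity with a default fine-tuned BERT-family model.
clf = pipeline("sentiment-analysis")
for review, result in zip(reviews, clf(reviews)):
    print(result["label"], "-", review)
```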
Background: Construction of networks from cross-sectional biological data is increasingly common. Many recent methods have been based on Gaussian graphical modeling and prioritize estimation of conditional pairwise dependencies among nodes in the network. However, challenges remain in understanding how specific paths through the resultant network contribute to overall 'network-level' correlations. For biological applications, understanding these relationships is particularly relevant for parsing structural information contained in complex subnetworks. Results: We propose the pair-path subscore (PPS), a method for interpreting Gaussian graphical models at the level of individual network paths. The scoring is based on the relative importance of such paths in determining the Pearson correlation between their terminal nodes. PPS is validated using human metabolomics data from the Hyperglycemia and Adverse Pregnancy Outcome (HAPO) study, with observations confirming well-documented biological relationships.
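To make the Gaussian-graphical-model setting concrete, here is a minimal sketch that estimates a precision matrix from simulated data and converts it to partial correlations, the conditional pairwise dependencies the abstract mentions. The simulated chain and the edge threshold are assumptions for illustration; the PPS scoring of individual paths goes beyond this.

```python
import numpy as np

# Minimal sketch: precision matrix -> partial correlations for a simulated
# chain 0 -> 1 -> 2; data, noise, and threshold are illustrative.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 4))
X[:, 1] += 0.8 * X[:, 0]
X[:, 2] += 0.8 * X[:, 1]

omega = np.linalg.inv(np.cov(X, rowvar=False))   # precision matrix
d = np.sqrt(np.diag(omega))
partial_corr = -omega / np.outer(d, d)           # rho_ij = -w_ij / sqrt(w_ii w_jj)
np.fill_diagonal(partial_corr, 1.0)

edges = np.abs(partial_corr) > 0.1               # crude edge selection
print(np.round(partial_corr, 2))
print("edge 0-2 present:", edges[0, 2])          # expect near zero: 0 and 2 are
                                                 # conditionally independent given 1
```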
Clinical parsing is useful in the medical domain. Clinical narratives are difficult to understand because they are in an unstructured format. Medical natural language processing systems are used to render these clinical narratives in a readable format. A clinical parser combines natural language processing with a medical lexicon. To make clinical narratives understandable, a parsing technique is used. In this paper we discuss a constituency parser for clinical narratives, which is based on phrase-structure grammar. This parser converts unstructured clinical narratives into structured reports. For each sentence, recall, precision, and bracketing F-measure are calculated.
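A minimal sketch of the bracketing scores mentioned above, assuming constituents are represented as (label, start, end) spans; the example spans are hypothetical, and full evaluation tools such as EVALB extract spans from complete parse trees.

```python
# Minimal sketch: bracketing precision/recall/F-measure over constituent
# spans, represented as hypothetical (label, start, end) tuples.

def bracketing_scores(gold_spans, pred_spans):
    gold, pred = set(gold_spans), set(pred_spans)
    matched = len(gold & pred)
    precision = matched / len(pred) if pred else 0.0
    recall = matched / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Made-up spans for one sentence.
gold = [("NP", 0, 2), ("VP", 2, 5), ("S", 0, 5)]
pred = [("NP", 0, 2), ("VP", 3, 5), ("S", 0, 5)]
print(bracketing_scores(gold, pred))  # approx (0.667, 0.667, 0.667)
```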
Background. Lexical knowledge, and in particular knowledge of multi-word expressions, is a cornerstone of language applications such as syntactic parsing and machine translation. Corpus-driven lexical acquisition is one of the major means of creating such knowledge, in order to build or consolidate dictionaries and similar types of lexical resources. We describe ongoing work devoted to the corpus-based extraction of multi-word expressions, in particular collocations, for the Romanian language. Romanian has been one of the 23 official languages of the European Union since 2007; it is the native language of around 24 million people, and is currently ranked 8th among the most spoken European languages worldwide, after Spanish (405 million native speakers), English (360), Portuguese (215), German (89), French (74), Italian (59), and Polish (40). This high rank contrasts, however, with the relatively scarce development of language resources and tools compared to other languages.
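As a sketch of corpus-driven collocation extraction, assuming a PMI-ranked bigram finder is an acceptable stand-in for the authors' pipeline, the snippet below uses NLTK; the toy token list replaces a real Romanian corpus, and serious extraction would add lemmatization and syntactic filtering.

```python
from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder

# Toy token sequence standing in for a large Romanian corpus.
tokens = ("banca centrala a anuntat ca banca centrala va publica "
          "raportul anual iar raportul anual apare maine").split()

finder = BigramCollocationFinder.from_words(tokens)
finder.apply_freq_filter(2)           # keep bigrams seen at least twice
measures = BigramAssocMeasures()
print(finder.nbest(measures.pmi, 5))  # candidate collocations ranked by PMI
```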
We describe a GB parser implemented along the lines of those written by Fong [Fong91] and Dorr [Dorr87]. The phrase structure recovery component is an implementation of Tomita's generalized LR parsing algorithm (described in [Tomi86]), with recursive control flow (similar to Fong's implementation). The major principles implemented are government, binding, bounding, trace theory, case theory, theta theory, and barriers. The particular version of GB theory we use is that described by Haegeman [Haeg91]. The parser is minimal in the sense that it implements the major principles needed in a GB parser, and has fairly good coverage of linguistically interesting portions of the English language.
Automated floor plan analysis and recognition have long been focal points in computer science research. Recently, there has been a notable increase in the use of learning-based techniques to automatically recognize floor plans from raster images. This advancement aims to extract valuable insights from architectural drawings, which are essential for understanding building layouts and their intended functions. These drawings often feature a variety of notations and constraints, and the lack of standardized notation leads to significant variability in both style and semantics across different floor plans. Addressing this challenge is a key focus of this review. This paper provides an extensive literature survey to tackle the issue of variability in floor plans. The review concentrates on methodologies that treat floor plans as raster images, with particular attention to learning-based approaches. By offering concise summaries of datasets, research scopes, and specific tasks, this review aims to guide future research and development in the fields of construction and design. The in-depth examination of automatic floor plan analysis and recognition methods presented here contributes to the evolving field of computer-assisted architectural understanding and design.
In this paper, we propose a new data-compression-based ECG biometric method for personal identification and authentication. The ECG is an emerging biometric that does not need liveliness verification. There is strong evidence that ECG signals contain sufficient discriminative information to allow the identification of individuals from a large population. Most approaches rely on ECG data and the fiducia of different parts of the heartbeat waveform. However, non-fiducial approaches have recently proved to be effective as well, and have the advantage of not relying critically on the accurate extraction of fiducia. We propose a non-fiducial method based on the Ziv-Merhav cross-parsing algorithm for symbol sequences (strings). Our method uses a string similarity measure obtained with a data compression algorithm. We present results on real data, one-lead ECG, acquired during a concentration task, from 19 healthy individuals, on which our approach achieves a 100% subject identification rate and ...
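The exact Ziv-Merhav cross-parsing measure is not reproduced here; as a sketch of the same compression-based-similarity idea, the snippet below computes the normalized compression distance (NCD) with zlib over hypothetical ECG-derived symbol strings.

```python
import zlib

# Minimal sketch of compression-based string similarity. NCD is a related
# measure, not the paper's Ziv-Merhav cross-parsing algorithm.

def c(data: bytes) -> int:
    """Compressed length in bytes."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: small when x and y share structure."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

# Hypothetical symbol sequences derived from three ECG recordings.
ecg_a = b"abbabbaabbabba" * 20
ecg_b = b"abbabbaabbabba" * 19 + b"abab"
ecg_c = b"cddcdccddccdcd" * 20
print(ncd(ecg_a, ecg_b) < ncd(ecg_a, ecg_c))  # True: a is closer to b than to c
```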
Since a tweet is limited to 140 characters, it is ambiguous and difficult for traditional Natural Language Processing (NLP) tools to analyse. This research presents KeyXtract, which enhances the machine-learning-based Stanford CoreNLP Part-of-Speech (POS) tagger with the Twitter model to extract essential keywords from a tweet. The system was developed using rule-based parsers and two corpora. The data for the research were obtained from the Twitter profile of a telecommunication company. The system development consisted of two stages. In the initial stage, a domain-specific corpus was compiled after analysing the tweets. The POS tagger extracted the noun phrases and verb phrases, while the parsers removed noise and extracted any other keywords missed by the POS tagger. The system was evaluated using the Turing Test. After it was tested and compared against Stanford CoreNLP, the second stage of the system was developed, addressing the shortcomings of the first stage. It was enhanced using ...
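A minimal sketch of the noun-phrase keyword extraction step, assuming NLTK's tagger and a simple chunk grammar as stand-ins for the Twitter-adapted CoreNLP tagger and rule-based parsers described above; the tweet and grammar are illustrative.

```python
import nltk  # assumes the punkt and averaged_perceptron_tagger data are installed

# Minimal sketch: tag a tweet, chunk noun phrases, and emit them as keyword
# candidates. The grammar and example tweet are illustrative only.
grammar = "NP: {<DT>?<JJ>*<NN.*>+}"
chunker = nltk.RegexpParser(grammar)

tweet = "Slow connection in Colombo again, please fix the network coverage"
tagged = nltk.pos_tag(nltk.word_tokenize(tweet))
tree = chunker.parse(tagged)

keywords = [" ".join(word for word, tag in subtree.leaves())
            for subtree in tree.subtrees(filter=lambda t: t.label() == "NP")]
print(keywords)  # noun-phrase keyword candidates
```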
To define the relationship between hypnosis and aspects of memory concerning the encoding and recall of short texts, standardized stories were narrated to 12 subjects, both during an ordinary state of consciousness and after hypnotic induction by means of the Stanford Hypnotic Susceptibility Scale (Form C). The narrative material used as a stimulus was based on several stories taken from popular oral tradition, previously analyzed according to the classic criteria proposed by Rumelhart in 1975 and by Mandler and Johnson in 1977. The subjects' memory performance under both experimental conditions was tape-recorded and compared with the analysis of the original stories (Terminal Nodes) as well as with the higher linguistic structures of the scheme (Basic Nodes), according to Rumelhart's typology. During hypnosis, the subjects recalled significantly fewer narrative elements at both levels of analysis (Terminal Nodes and Basic Nodes). We conclude that hypnosis does not enhance recent memory.
Parsing continuous human motion into meaningful segments plays an essential role in various applications. In this work, we propose a hierarchical dynamic clustering framework to derive action clusters from a sequence of local features in an unsupervised, bottom-up manner. We systematically investigate the modules in this framework and, in particular, propose diverse temporal pooling schemes in order to realize accurate temporal action localization. We demonstrate our method on two motion parsing tasks: temporal action segmentation and abnormal behavior detection. The experimental results indicate that the proposed framework is significantly more effective than other related state-of-the-art methods on several datasets.
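A minimal sketch of one temporal pooling scheme of the kind the abstract mentions, assuming per-frame feature vectors and a fixed window; the shapes, window size, and pooling modes are illustrative, not the paper's exact schemes.

```python
import numpy as np

# Minimal sketch of temporal pooling over per-frame features; shapes and
# the window size are illustrative stand-ins.
def temporal_pool(features: np.ndarray, window: int, mode: str = "mean") -> np.ndarray:
    """Pool a (T, D) sequence of frame features into (T // window, D) segments."""
    T, D = features.shape
    usable = (T // window) * window            # drop a ragged tail, if any
    windows = features[:usable].reshape(-1, window, D)
    return windows.mean(axis=1) if mode == "mean" else windows.max(axis=1)

frames = np.random.rand(300, 64)               # 300 frames of 64-d local features
pooled = temporal_pool(frames, window=15)      # 20 coarser temporal segments
print(pooled.shape)                            # (20, 64)
```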
Operative notes contain rich information about techniques, instruments, and materials used in procedures. To assist development of effective information extraction (IE) techniques for operative notes, we investigated the sublanguage used to describe actions within the operative report 'procedure description' section. Deep parsing of 362,310 operative notes with a Stanford parser expanded using the SPECIALIST Lexicon yielded 200 verbs (92% coverage), including 147 action verbs. Nominal action predicates for each action verb were gathered from WordNet, the SPECIALIST Lexicon, the New Oxford American Dictionary, and Stedman's Medical Dictionary. Coverage gaps were seen in existing lexical, domain, and semantic resources (the Unified Medical Language System (UMLS) Metathesaurus, SPECIALIST Lexicon, WordNet, and FrameNet). Our findings demonstrate the need to construct surgical domain-specific semantic resources for IE from operative notes.
This study examines the reading patterns of native speakers (NSs) and high-level (Chinese) nonnative speakers (NNSs) on three English sentence types involving temporarily ambiguous structural configurations. The reading patterns on each sentence type indicate that both NSs and NNSs were biased toward specific structural interpretations. These results are interpreted as evidence that both first-language and second-language (L2) sentence comprehension is guided (at least in part) by structure-based parsing strategies, and thus as counterevidence to the claim that NNSs are largely limited to rudimentary (or "shallow") syntactic computation during online L2 sentence processing.
This study aims to report the authors' experiences of situations they lived through while applying the foundations of Parse's "Human Becoming" theory, and to present its principles and concepts in order to disseminate the theory. After a theoretical study undertaken to understand the theory, its principles were applied in practice. The authors found that this entails a change in the nurse's values and beliefs, transforming their view of the human being and of health, and making care more humanistic. The lived experience enabled the authors' personal and professional growth.
Abstract. DeSR is a statistical transition-based dependency parser which learns from annotated corpora which actions to perform for building parse trees while scanning a sentence. We describe recent improvements to the parser, in particular stacked parsing, exploiting a beam-search strategy, and using a Multilayer Perceptron classifier. For the Evalita 2009 Dependency Parsing task, DeSR was configured to use a combination of stacked parsers. The stacked combination achieved the best accuracy scores in both the ...
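A minimal sketch of the transition-based parsing loop that parsers in the DeSR family are built around: a classifier repeatedly chooses shift or arc actions while scanning the sentence. The toy heuristic below stands in for DeSR's trained Multilayer Perceptron, and the action set is a simplified arc-standard variant, not DeSR's exact transition system.

```python
# Minimal sketch of transition-based dependency parsing: at each step a
# classifier picks an action; here a toy heuristic replaces a trained MLP.

def parse(words, choose_action):
    """Simplified arc-standard loop. Returns head index per token (0 = root)."""
    stack, buffer = [], list(range(1, len(words) + 1))  # 1-based token ids
    heads = {}
    while buffer or len(stack) > 1:
        action = choose_action(stack, buffer)
        if action == "shift" and buffer:
            stack.append(buffer.pop(0))
        elif action == "left" and len(stack) >= 2:
            dep = stack.pop(-2)              # second-from-top depends on top
            heads[dep] = stack[-1]
        elif len(stack) >= 2:                # right-arc (also the fallback)
            dep = stack.pop()                # top depends on second-from-top
            heads[dep] = stack[-1]
        else:
            stack.append(buffer.pop(0))      # force a shift to make progress
    if stack:
        heads[stack.pop()] = 0               # remaining token becomes the root
    return heads

# Toy head-final heuristic in place of a trained classifier.
def toy_classifier(stack, buffer):
    return "shift" if buffer else "right"

print(parse(["Economic", "news", "had", "effects"], toy_classifier))
# {4: 3, 3: 2, 2: 1, 1: 0} under the toy heuristic
```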
Proceedings of the Workshop on Language in Social Media (LSM 2011), pages 39-47, Portland, Oregon, 23 June 2011. ©2011 Association for Computational Linguistics. Detecting Forum Authority Claims in Online Discussions. Alex Marin, Bin Zhang, Mari Ostendorf ...