Lauren A Fromont | Université de Montréal
Papers by Lauren A Fromont
Routledge eBooks, Dec 4, 2023
medRxiv (Cold Spring Harbor Laboratory), Mar 18, 2024
arXiv (Cornell University), Aug 10, 2023
PLOS Computational Biology, Mar 1, 2024
European Radiology Experimental
Artificial intelligence (AI) is transforming the field of medical imaging and has the potential to bring medicine from the era of 'sick-care' to the era of healthcare and prevention. The development of AI requires access to large, complete, and harmonized real-world datasets that are representative of population and disease diversity. However, to date, efforts are fragmented and based on single-institution, size-limited, and annotation-limited datasets. Available public datasets (e.g., The Cancer Imaging Archive, TCIA, USA) are limited in scope, making model generalizability difficult to achieve. In this direction, five European Union projects are currently working on the development of big data infrastructures that will enable European, ethically and General Data Protection Regulation-compliant, quality-controlled, cancer-related medical imaging platforms, in which both large-scale data and AI algorithms will coexist. The vision is to create sustainable AI cloud-based platforms for the deve...
European Radiology Experimental
A huge amount of imaging data is becoming available worldwide, and artificial intelligence algorithms can provide a wide range of improvements in clinical care for diagnosis and decision support. In this context, it has become essential to properly manage and handle these medical images and to define which metadata should be considered, in order for the images to realize their full potential. Metadata are additional data associated with the images, which provide a complete description of the image acquisition, curation, and analysis, and of the relevant clinical variables associated with the images. Currently, several data models are available to describe one or more subcategories of metadata, but a unique, common, and standard data model capable of fully representing the heterogeneity of medical metadata has not yet been developed. This paper reports the state of the art on metadata models for medical imaging, the current limitations and further developments, and...
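To make the distinction concrete, here is a minimal Python sketch of what a unified imaging-metadata record might look like, covering the acquisition, curation/analysis, and clinical subcategories the paper describes. All field names are illustrative assumptions, not taken from any published standard or from the paper itself.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical, simplified record illustrating the three metadata
# subcategories discussed above. Field names are illustrative only.
@dataclass
class ImagingMetadata:
    # Acquisition metadata
    modality: str                                  # e.g., "MR", "CT"
    manufacturer: str
    acquisition_date: str                          # ISO 8601 date
    voxel_spacing_mm: tuple[float, float, float]
    # Curation / analysis provenance
    anonymized: bool = True
    curation_steps: list[str] = field(default_factory=list)
    # Clinical variables linked to the image
    diagnosis_code: Optional[str] = None           # e.g., an ICD-10 code
    patient_age_years: Optional[int] = None

record = ImagingMetadata(
    modality="MR",
    manufacturer="ExampleVendor",
    acquisition_date="2023-05-14",
    voxel_spacing_mm=(1.0, 1.0, 1.2),
    curation_steps=["defacing", "intensity normalization"],
    diagnosis_code="C71.9",
    patient_age_years=62,
)
print(record.modality, record.curation_steps)
```

A common, standard model would fix such a schema across institutions so that acquisition, provenance, and clinical context travel with every image.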
Briefings in Bioinformatics
Since its launch in 2008, the European Genome–Phenome Archive (EGA) has been leading the archiving and distribution of human identifiable genomic data. One community concern is the potential usability of the stored data: as of now, data submitters are not required to perform any quality control (QC) before uploading their data and associated metadata. Here, we present a new File QC Portal developed at EGA, along with QC reports created for 1,694,442 files [FASTQ, sequence alignment map (SAM)/binary alignment map (BAM)/CRAM, and variant call format (VCF)] submitted to EGA. QC reports allow anonymous EGA users to view summary-level information about the files within a specific dataset, such as read quality, alignment quality, and the number and type of variants, among other features. Researchers benefit from being able to assess the quality of data before requesting access, thereby increasing the reusability of data (https://e...
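As an illustration of the kind of summary-level information such a QC report can expose, the following Python sketch counts variants by type in a VCF file. It is a simplified, hypothetical example, not the EGA File QC Portal's actual implementation.

```python
# Minimal sketch of one summary-level QC statistic for a VCF:
# variant counts by type. Illustrative only; not the EGA's code.
from collections import Counter

def summarize_vcf(path: str) -> Counter:
    counts = Counter()
    with open(path) as vcf:
        for line in vcf:
            if line.startswith("#"):       # skip header lines
                continue
            fields = line.rstrip("\n").split("\t")
            ref, alts = fields[3], fields[4].split(",")  # REF and ALT columns
            for alt in alts:
                if len(ref) == 1 and len(alt) == 1:
                    counts["SNV"] += 1
                elif len(ref) > len(alt):
                    counts["deletion"] += 1
                elif len(ref) < len(alt):
                    counts["insertion"] += 1
                else:
                    counts["other"] += 1   # e.g., multi-nucleotide variants
    return counts

# Example usage (path is a placeholder):
# print(summarize_vcf("example.vcf"))
```

Aggregating such counts per dataset lets a prospective requester judge data quality without ever seeing individual-level records.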
Nucleic Acids Research, 2021
The European Genome-phenome Archive (EGA - https://ega-archive.org/) is a resource for long-term, secure archiving of all types of potentially identifiable genetic, phenotypic, and clinical data resulting from biomedical research projects. Its mission is to foster the reuse of hosted data, enable reproducibility, and accelerate biomedical and translational research in line with the FAIR principles. Launched in 2008, the EGA has grown quickly and currently archives over 4,500 studies from nearly one thousand institutions. The EGA operates a distributed data access model in which requests are made to the data controller, not to the EGA; the submitter therefore retains control over who has access to the data and under which conditions. Given the size and value of the data hosted, the EGA is constantly improving its value chain, that is, how the EGA can contribute to enhancing the value of human health data by facilitating its submission, discovery, access, and distribution, as well as leading the desig...
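The distributed access model can be illustrated with a short, hypothetical Python sketch in which the archive merely forwards access requests to the dataset's data controller, who makes the grant/deny decision. Class names and the accession identifier are placeholders, not the EGA's actual API.

```python
# Hypothetical model of the distributed access flow described above:
# the archive stores data, but the data controller decides on access.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    requester: str
    dataset_id: str
    intended_use: str

class DataController:
    """Submitter-side authority that grants or denies access."""
    def __init__(self, approved_uses: set[str]):
        self.approved_uses = approved_uses

    def decide(self, request: AccessRequest) -> bool:
        return request.intended_use in self.approved_uses

class Archive:
    """The archive only routes requests and enforces the decision."""
    def __init__(self):
        self.controllers: dict[str, DataController] = {}

    def request_access(self, request: AccessRequest) -> str:
        controller = self.controllers[request.dataset_id]
        return "granted" if controller.decide(request) else "denied"

archive = Archive()
archive.controllers["EGAD000XXXXXXX"] = DataController({"cancer research"})
req = AccessRequest("dr.smith", "EGAD000XXXXXXX", "cancer research")
print(archive.request_access(req))  # -> granted
```

The design point is that the archive never owns the access decision, so submitters keep control over who uses their data and under which conditions.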
International Journal of Psychophysiology, 2016
The nature of the cognitive processes underlying the N400 component in event-related brain potentials (ERPs) is still controversial. Semantic priming, which reduces N400 amplitudes, is not systematically observed in tasks that inhibit conscious processing of the prime (see, e.g., Royle, Drury, Bourguignon, & Steinhauer, 2012, for a review). However, when the prime is consciously perceived, strategic effects might arise. Much evidence from off-line word-recognition tasks points to post-lexical semantic integration in working memory, but other studies have argued in favour of automatic intra-lexical priming by means of spreading activation. To address this question, we manipulated stimulus lists in order to promote or demote semantic priming.
Methods: We modeled our ERP priming experiment (in French) on a behavioural priming study (McKoon & Ratcliff, 1995) in which a minority of strongly related, clearly perceptible prime-target pairs of one semantic dimension (e.g., a member-category relation such as 'hammer-tool') were embedded in lists dominated by a majority of clearly perceptible pairs of a different semantic relation (e.g., part-whole: 'finger-hand'), and vice versa. Eleven French-speaking adults participated in a semantic decision task ("Are the two words related?") while their EEG was recorded (Neuroscan SynAmps2; ERP data analysis with EEProbe; ANT, Enschede, the Netherlands).
Results: N400 attenuations were significantly reduced when a related word pair belonged to the minority semantic dimension of the list rather than to the majority dimension, where the same items showed strong N400 attenuations. Our ERP data are in line with McKoon and Ratcliff's behavioural data and suggest that, even with short stimulus onset asynchronies, priming as reflected by the N400 is not simply a function of spreading activation in long-term semantic memory. Instead, semantic priming itself can be primed by the type of semantic relationship found in other list items. In addition to contrasting our results with apparently conflicting data from other studies, we will explain why our findings are important for research investigating syntax-semantics interactions in sentence processing, including studies of 'semantic blocking' effects (e.g., Steinhauer & Drury, 2012).
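For readers unfamiliar with how N400 attenuation is typically quantified, the following Python sketch computes a priming effect as the mean amplitude difference between unrelated and related targets in a 300-500 ms window. The sampling rate, time window, and data here are assumptions for illustration, not the authors' actual pipeline (which used EEProbe).

```python
# Illustrative quantification of an N400 priming effect on synthetic
# grand-average ERPs. All parameters are assumed for the example.
import numpy as np

rng = np.random.default_rng(0)
sfreq = 500                                 # sampling rate in Hz (assumed)
times = np.arange(-0.2, 0.8, 1 / sfreq)     # epoch from -200 to 800 ms

def mean_amplitude(erp: np.ndarray, tmin: float, tmax: float) -> float:
    """Mean amplitude in the [tmin, tmax) window, in microvolts."""
    mask = (times >= tmin) & (times < tmax)
    return float(erp[mask].mean())

# Synthetic ERPs for related vs unrelated targets (microvolts)
related = rng.normal(0.0, 0.5, times.size)
unrelated = rng.normal(0.0, 0.5, times.size)
unrelated[(times >= 0.3) & (times < 0.5)] -= 2.0  # larger (more negative) N400

effect = mean_amplitude(unrelated, 0.3, 0.5) - mean_amplitude(related, 0.3, 0.5)
# A negative value means the unrelated condition was more negative,
# i.e., the related targets showed an attenuated N400 (priming).
print(f"N400 priming effect: {effect:.2f} microvolts")
```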
International Journal of Psychophysiology, 2018
International Congress of Phonetic Sciences, 2015
Brain and Language, 2020
Late second language (L2) learners report difficulties in specific linguistic areas such as syntactic processing, presumably because brain plasticity declines with age (following the critical period hypothesis). While there is also evidence that L2 learners can achieve native-like online processing with sufficient proficiency (following the convergence hypothesis), considering multiple mediating factors and their impact on language processing has proven challenging. We recorded EEG while native speakers (n = 36) and L2 speakers of French (n = 40) read sentences that were either well-formed or contained a syntactic-category error, a lexical-semantic anomaly, or both. Consistent with the critical period hypothesis, group differences revealed that while native speakers showed a biphasic N400-P600 response to ungrammatical sentences, L2 learners as a group showed only an N400. However, individual data modeling using a Random Forests approach revealed that language exposure and proficiency are the most reliable predictors of ERP responses, with N400 and P600 effects becoming larger as exposure to French and proficiency increased, as predicted by the convergence hypothesis.
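The individual-differences analysis can be sketched in Python as follows: fit a Random Forest predicting each participant's ERP effect size from exposure and proficiency, then inspect feature importances. The data below are synthetic and the variable scales are assumed; the study's actual predictors and model settings may differ.

```python
# Hedged sketch of a Random Forests individual-differences analysis.
# Synthetic data; variable names and scales are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
n = 40                                        # L2 participants
exposure = rng.uniform(0, 20, n)              # years of exposure to French
proficiency = rng.uniform(0, 100, n)          # proficiency test score
age_of_acquisition = rng.uniform(10, 30, n)   # age of L2 acquisition

# Simulate P600 effect sizes that grow with exposure and proficiency,
# mirroring the pattern the abstract reports.
p600_effect = 0.05 * exposure + 0.03 * proficiency + rng.normal(0, 0.5, n)

X = np.column_stack([exposure, proficiency, age_of_acquisition])
model = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, p600_effect)

for name, importance in zip(
    ["exposure", "proficiency", "age_of_acquisition"], model.feature_importances_
):
    print(f"{name}: {importance:.3f}")
```

With data simulated this way, exposure and proficiency dominate the importance ranking, which is the kind of evidence the abstract cites for the convergence hypothesis.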
Frontiers in Communication, 2018