Allyson Ettinger

May 2023. We received a Best Paper Award at EACL 2023 for our COMPS paper -- congratulations Kanishka!

May 2023. Paper by Kanishka, "COMPS: Conceptual Minimal Pair Sentences for testing Robust Property Knowledge and its Inheritance in Pre-trained Language Models", to be presented at EACL 2023!

March 2023. Gave a talk at the Transdisciplinary Institute in Applied Data Science (TRIADS) seminar series at Washington University in St. Louis.

December 2022. Gave a keynote talk at CoNLL (virtually) in Abu Dhabi, UAE.

December 2022. Paper by Jiaxuan on "Heuristic interpretation as rational inference: A computational model of the N400 and P600 in language processing", accepted to Cognition!

December 2022. Paper by Jiaxuan and Lang on counterfactual reasoning in pre-trained LMs, presented at Workshop on neuro Causal and Symbolic AI (nCSI) at NeurIPS.

November 2022. Gave a talk at the ILFC monthly online seminar.

October 2022. Paper by Sanghee and Lang presented at COLING 2022, on knowledge of dialogue response dynamics in pre-trained LMs!

September 2022. Gave a talk at UT Austin.

September 2022. Gave a talk at the McGill Linguistics Colloquium.

August 2022. Gave an invited talk at the UC Irvine Cognitive Modeling Summer School.

July 2022. Paper presentation by Kanishka at CogSci 2022, on property induction in neural LMs!

July 2022. Gave a keynote talk at the *SEM conference.

June 2022. Invited speaker and panelist, "The Challenge of Compositionality for AI" workshop.

June 2022. Gave an invited talk at Microsoft Cognitive Services Research Group Distinguished Talk Series.

May 2022. Gave a keynote talk at the DeeLIO workshop at ACL.

April 2022. Invited talks at UMD CLIP lab, Notre Dame NLP seminar, and UPenn CLunch.

March 2022. Gave an invited presentation at the Mini-workshop on Linguistic Ambiguity and Deep Learning.

February 2022. Invited talks at the Stanford NLP Seminar and the CMU brAIn Seminar.

November 2021. Three paper presentations coming up at EMNLP 2021: 1) Lalchand on testing robustness of meaning representations in pre-trained LMs in the main conference, 2) Lalchand and Yan on pragmatic competence in pre-trained LMs at CoNLL, and 3) Qinxuan on encoding of syntactic anomaly information in pre-trained sentence embeddings at BlackBoxNLP.

September 2021. Invited talks at the OSU Department of Linguistics Colloquium and the van Schijndel research group at Cornell.

August 2021. Paper presentation by Lang on the impact of fine-tuning on semantic composition in transformers, published in Findings of ACL and presented at the Rep4NLP workshop.

July 2021. Paper presentation by Kanishka on whether language models learn typicality, presented at CogSci 2021.

May 2021. Gave a talk for the UChicago MACSS Computational Social Science Workshop.

May 2021. Gave an interview with the TWIML podcast.

May 2021. Served as a panelist for the ICLR Brain2AI workshop panel, "How can findings about the brain improve AI systems?".

April 2021. Gave a talk for the NYU NLP/Text-as-Data speaker series.

April 2021. PhD student Lang Yu has successfully defended his dissertation, Analyzing and Improving Compositionality in Neural Language Models!

February 2021. SCiL 2021 (Meeting of the Society for Computation in Linguistics) was a success! Thank you to my fellow organizers, and to PC members, authors, and the many who attended the virtual conference!

February 2021. Gave a talk for the English Literature and Language Department of Dongguk University.

November 2020. Three papers at EMNLP 2020 (Conference on Empirical Methods in Natural Language Processing): 1) assessing phrase representation and composition in transformers, 2) applying semantic priming to examine lexical sensitivity in BERT (Findings/BlackBoxNLP), and 3) long document coreference resolution.

October 2020. Gave a talk at the MIT CompLang discussion group.

September 2020. Gave a talk at the MIT Computational Psycholinguistics Lab.

September 2020. Gave a talk for the Georgia Tech Workshop on Language, Technology, & Society. A recording of the talk can be found here.

July 2020. Three papers at ACL 2020 (Association for Computational Linguistics annual meeting): 1) probing contextual embeddings, 2) diagnostics for BERT (TACL paper), and 3) tracking entities with memory-augmented neural networks.

May 2020. Gave a talk in the Northwestern University Linguistics Department colloquium series.

January 2020. Paper now out in TACL (Transactions of the Association for Computational Linguistics): What BERT is Not: Lessons from a New Suite of Psycholinguistic Diagnostics for Language Models.