Work

Overview

This section covers my academic output (publications, non-archival conference papers, and presentations and posters), media interviews, industry employment, teaching experience, and other projects. For my full CV in PDF format, click here.

Publications

  • Michaelov, J. A., Arnett, C., Chang, T. A., & Bergen, B. K. (2023). ‘Structural priming demonstrates abstract grammatical representations in multilingual language models’. The 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023). [arXiv]

  • Michaelov, J. A. & Bergen, B. K. (2023). ‘Emergent inabilities? Inverse scaling over the course of pretraining’. Findings of the Association for Computational Linguistics: EMNLP 2023. [arXiv]

  • Michaelov, J. A. & Bergen, B. K. (2023). ‘Ignoring the alternatives: The N400 is sensitive to stimulus preactivation alone’. Cortex. [Full text]

  • Michaelov, J. A. & Bergen, B. K. (2023). ‘Rarely a problem? Language models exhibit inverse scaling in their predictions following few-type quantifiers’. Findings of the Association for Computational Linguistics: ACL 2023. [arXiv]

  • Rezaii, N., Michaelov, J., Josephy‐Hernandez, S., Ren, B., Hochberg, D., Quimby, M., & Dickerson, B. C. (2023). ‘Measuring Sentence Information via Surprisal: Theoretical and Clinical Implications in Nonfluent Aphasia’. Annals of Neurology.

  • Trott, S., Jones, C., Chang, T., Michaelov, J., & Bergen, B. (2023). ‘Do Large Language Models know what humans know?’. Cognitive Science, 47(7). [Full text]

  • Michaelov, J. A., Bardolph, M. D., Van Petten, C. K., Bergen, B. K., & Coulson, S. (2023). ‘Strong Prediction: Language Model Surprisal Explains Multiple N400 Effects’. Neurobiology of Language. [Open-Access Link]

  • Michaelov, J. A. & Bergen, B. K. (2022). ‘Collateral facilitation in humans and language models’. Proceedings of the 26th Conference on Computational Natural Language Learning (CoNLL 2022). [Full text]

  • Michaelov, J. A. & Bergen, B. K. (2022). ‘Do language models make human-like predictions about the coreferents of Italian anaphoric zero pronouns?’. The 29th International Conference on Computational Linguistics (COLING 2022). [Full text]

  • Michaelov, J. A., Coulson, S., & Bergen, B. K. (2022). ‘So Cloze yet so Far: N400 Amplitude is Better Predicted by Distributional Information than Human Predictability Judgements’. IEEE Transactions on Cognitive and Developmental Systems. [arXiv]

  • Michaelov, J. A., & Bergen, B. K. (2020). ‘How well does surprisal explain N400 amplitude under different experimental conditions?’. Proceedings of the 24th Conference on Computational Natural Language Learning (CoNLL 2020). (Nominated for best paper) [Full text]

  • Michaelov, J. (2017). ‘The Young and the Old: (T) Release in Elderspeak’. Lifespans and Styles, 3(1), 2–9. [Full Text]

Non-Archival Conference Papers

  • Michaelov, J. A., Coulson, S. & Bergen, B. K. (2023). ‘Can Peanuts Fall in Love with Distributional Semantics?’. Proceedings of the Annual Meeting of the Cognitive Science Society, 45. Sydney, Australia. [arXiv preprint]

  • Michaelov, J. A. & Bergen, B. K. (2022). ‘The more human-like the language model, the more surprisal is the best predictor of N400 amplitude’. The NeurIPS 2022 Workshop on Information-Theoretic Principles in Cognitive Systems (InfoCog). New Orleans, USA.

  • Jones, C. R., Chang, T. A., Coulson, S., Michaelov, J. A., Trott, S., & Bergen, B. (2022). ‘Distributional Semantics Still Can’t Account for Affordances’. In Proceedings of the Annual Meeting of the Cognitive Science Society, 44. Toronto, Canada.

  • Michaelov, J. A., Bardolph, M. D., Coulson, S., & Bergen, B. K. (2021). ‘Different kinds of cognitive plausibility: why are transformers better than RNNs at predicting N400 amplitude?’. In Proceedings of the Annual Meeting of the Cognitive Science Society, 43. University of Vienna, Vienna, Austria (Conference held online). [Preprint]

Presentations and Posters

  • Arnett, C., Chang, T. A., Michaelov, J. A., & Bergen, B. K. (2023). ‘Crosslingual Structural Priming and the Pre-Training Dynamics of Bilingual Language Models’. Presented at The 3rd Multilingual Representation Learning Workshop (MRL 2023). [arXiv]

  • Michaelov, J. A., Coulson, S., & Bergen, B. K. (2022). ‘Do we need situation models? Distributional semantics can explain how peanuts fall in love’. Presented at The 35th Annual Conference on Human Sentence Processing (HSP 2022). University of California Santa Cruz, Santa Cruz, USA (Conference held online).

  • Michaelov, J. A., Coulson, S., & Bergen, B. K. (2022). ‘Cloze behind: Language model surprisal predicts N400 amplitude better than cloze’. Presented at The 35th Annual Conference on Human Sentence Processing (HSP 2022). University of California Santa Cruz, Santa Cruz, USA (Conference held online). [Poster]

  • Michaelov, J. A., Bardolph, M. D., Coulson, S., & Bergen, B. K. (2021). ‘Is the relationship between word probability and processing difficulty linear or logarithmic?’. Presented at The 34th CUNY Conference on Human Sentence Processing (CUNY 2021). University of Pennsylvania, Philadelphia, USA (Conference held online).

  • Michaelov, J. A., Bardolph, M. D., Coulson, S., & Bergen, B. K. (2020). ‘Surprisal is a good predictor of the N400 effect, but not for semantic relations’. Presented at The 26th Architectures and Mechanisms for Language Processing Conference (AMLaP 2020). Presentation in Special Session: Computational models of language processing. University of Potsdam, Potsdam, Germany (Conference held online). [Abstract]

  • Michaelov, J., Culbertson, J., & Rohde, H. (2017). ‘How universal are prominence hierarchies? Evidence from native English speakers’. Poster presented at The 23rd Architectures and Mechanisms for Language Processing Conference (AMLaP 2017). Lancaster, UK. [Abstract] [Poster]

  • Michaelov, J. (2017). ‘The Young and the Old: (T) Release in Elderspeak’. Presented at the Undergraduate Linguistics Association of Britain 2017 Conference (ULAB 2017). University of Cambridge, Cambridge, UK, September 4. [Abstract]

Media Interviews

Sandrine Ceurstemont (2023). ‘Bigger, Not Necessarily Better’. Communications of the ACM. (Interviewed and quoted in article).

Employment

Amazon

  • Applied Scientist Intern at Alexa Games (Summer 2023)

Teaching Experience

The University of California San Diego

  • TA: Data Science in Practice (Winter 2024)
  • TA: Introduction to Data Science (Fall 2023)
  • TA: Learning, Memory, and Attention (Spring 2023)
  • TA: Neurobiology of Cognition (Winter 2023)
  • TA: Cognitive Consequences of Technology (Fall 2022)
  • TA: Cognitive Perspectives (Summer 2022)
  • TA: What the *#!?: An Uncensored Introduction to Language (Fall 2021)
  • TA: Cognitive Neuroeconomics (Fall 2020)
  • TA: Cognitive Neuroeconomics (Summer 2020)
  • TA: Language Comprehension (Summer 2020)
  • TA: Cognitive Neuroeconomics (Winter 2020)
  • TA: What the *#!?: An Uncensored Introduction to Language (Fall 2019)
  • TA: Minds and Brains (Spring 2019)

The University of Edinburgh

  • Tutor: Logic 1 (2018)
  • Tutor: Informatics 1: Computation and Logic (2017)

Other Projects