Byung-Doh Oh (오병도)

Center for Data Science, NYU
oh.b@nyu.edu
Curriculum Vitae


I am a Faculty Fellow at the Center for Data Science at New York University, where I collaborate with Tal Linzen. I received my PhD in computational linguistics from The Ohio State University, where I worked with William Schuler.

My work aims to advance our understanding of language processing in humans and machines by drawing on techniques from psycholinguistics and machine learning. I am particularly interested in developing computational models that capture the real-time processing behavior of human language users, and interpretability techniques for studying the predictions and representations of neural networks.

Representative publications

  1. JML
    Dissociable frequency effects attenuate as large language model surprisal predictors improve
    Byung-Doh Oh and William Schuler
    Journal of Memory and Language, 2025
  2. EACL
    Frequency explains the inverse correlation of large language models’ size, training data amount, and surprisal’s fit to reading times
    Byung-Doh Oh, Shisen Yue, and William Schuler
    In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics, 2024
  3. ACL
    Token-wise decomposition of autoregressive language model hidden states for analyzing model predictions
    Byung-Doh Oh and William Schuler
    In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics, 2023
  4. TACL
    Why does surprisal from larger Transformer-based language models provide a poorer fit to human reading times?
    Byung-Doh Oh and William Schuler
    Transactions of the Association for Computational Linguistics, 2023
  5. EMNLP
    Entropy- and distance-based predictors from GPT-2 attention patterns predict reading times over and above GPT-2 surprisal
    Byung-Doh Oh and William Schuler
    In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 2022
  6. FAI
    Comparison of structural parsers and neural language models as surprisal estimators
    Byung-Doh Oh, Christian Clark, and William Schuler
    Frontiers in Artificial Intelligence, 2022