Hailin Hao


Hi and thanks for stopping by! I am a third-year PhD student in the Department of Linguistics at the University of Southern California. I received my BA in Linguistics from the University of Amsterdam in 2020. My CV can be accessed here (updated April 2024).

Using tools from psycholinguistic experimentation, computational modeling, and corpus analysis, I study language processing and use in humans and language models. Some topics I've been working on include:

  • Similarity-based interference

  • Local coherence

  • Locality vs. anti-locality effects

  • Computation of non-literal interpretations

  • Individual differences in predictive processing and pragmatic reasoning

  • Evaluations of formal and functional linguistic competence in language models

I am mainly advised by Elsi Kaiser at USC Linguistics, but I also actively collaborate with people both within and outside the department, including (in alphabetical order):

Before starting graduate school, I worked extensively on prosody and its interface with syntax, semantics, and processing. I have one manuscript in preparation with Lisa Cheng and Leticia Pablos Robles at Leiden University on the role of prosody in anticipating clause types in Mandarin Chinese. My undergraduate thesis, under the supervision of Jeannette Schaeffer and Marijn van 't Veer, investigates the prosody of Contrastive Topic and Contrastive Focus in Mandarin Chinese using a production task.

Peer-reviewed papers

Hao, H., Schaeffer, J., & van 't Veer, M. (to appear). On the prosody of Contrastive Topic and Contrastive Focus in Mandarin Chinese. Proceedings of the 40th West Coast Conference on Formal Linguistics (WCCFL). Somerville, MA: Cascadilla Proceedings Project. [PDF]

Hao, H., Hahn, M., & Kaiser, E. (2023). How do syntactic statistics and semantic plausibility modulate local coherence effects? In Proceedings of the 45th Annual Meeting of the Cognitive Science Society. [PDF]

Hao, H. (2023). Evaluating Transformers' sensitivity to syntactic embedding depth. In: Mehmood, R., et al. Distributed Computing and Artificial Intelligence, Special Sessions I, 20th International Conference. DCAI 2023. Lecture Notes in Networks and Systems, vol 741. Springer, Cham. [PDF]

Hao, H., & He, M. (2023). Can Large Language Model Surprisal capture the informativity bias in human language processing? In: Mehmood, R., et al. Distributed Computing and Artificial Intelligence, Special Sessions I, 20th International Conference. DCAI 2023. Lecture Notes in Networks and Systems, vol 741. Springer, Cham. [PDF]

Conference presentations

2024. Hao, H., Himanshu, Y., & Kaiser, E. High Expectations Enhance Locality Effects: Evidence from Naturalistic Reading Time Corpora. Talk at the 6th California Meeting on Psycholinguistics (CAMP6), Stanford, CA, USA.

2024. Li, J., Hao, H., & Futrell, R. Language models can adapt better to within-clause than across-clause exchange errors. Poster at the 6th California Meeting on Psycholinguistics (CAMP6), Stanford, CA, USA.

2024. Hao, H.*, He, M.*, & Fuchs, Z. Are Informativity-Based Linguistic Predictions Driven by Gender-Stereotypical Knowledge? Poster at the 6th California Meeting on Psycholinguistics (CAMP6), Stanford, CA, USA.

2024. Hao, H., Fuchs, Z., & Vasishth, S. Similarity-Based Interferences in Chinese Classifier-Noun Dependencies. Poster at the 6th California Meeting on Psycholinguistics (CAMP6), Stanford, CA, USA.

2023. Hao, H., Fuchs, Z., & Vasishth, S. Similarity-Based Interferences in Chinese Classifier-Noun Dependencies. Talk at the 6th Crosslinguistic Perspectives on Processing and Learning Workshop (X-PPL 2023), Zurich, Switzerland.

2023. Hao, H., Fuchs, Z., & Vasishth, S. Similarity-Based Interferences in Chinese Classifier-Noun Dependencies. Talk at the 2nd Architectures and Mechanisms for Language Processing - Asia (AMLaP-Asia), Hong Kong.

2023. Hao, H., & He, M. Can large language model surprisal capture the informativity bias in human language processing? Poster at the 28th Annual Conference on Architectures and Mechanisms for Language Processing, San Sebastián, Spain.

2023. Hao, H., Hahn, M., & Kaiser, E. How Do Syntactic Statistics and Semantic Plausibility Modulate Local Coherence Effects? Talk at the 45th Annual Meeting of the Cognitive Science Society (CogSci2023), Sydney, Australia.

2023. Hao, H. Evaluating Transformers' Sensitivity to Syntactic Embedding Depth. Talk at the 20th International Conference on Distributed Computing and Artificial Intelligence (DCAI2023), Guimarães, Portugal.

2023. Hao, H., & He, M. Can large language model surprisal capture the informativity bias in human language processing? Talk at the 20th International Conference on Distributed Computing and Artificial Intelligence (DCAI2023), Guimarães, Portugal.

2023. Hao, H. Evaluating Transformers' sensitivity to syntactic embedding depth. Poster at the 36th Annual Conference on Human Sentence Processing (HSP2023), Pittsburgh, PA, USA.

2023. Hao, H., Kaiser, E., & Hahn, M. Effects of plausibility on agreement attraction. Poster at the 36th Annual Conference on Human Sentence Processing (HSP 2023), Pittsburgh, PA, USA.

Grace Ford Salvatori 301, 3601 Watt Way, Los Angeles, CA 90089-1693
hailinhao061 [at] gmail [dot] com
hailinha [at] usc [dot] edu