
Hailin Hao
News:
Nov/Dec 2023: I will present my work on similarity-based interferences in Chinese classifier-noun dependencies at X-PPL 2023 in Zurich and AMLaP Asia 2023 in Hong Kong.
Aug 2023: My co-author Muxuan He will present a joint poster at AMLaP 2023 in San Sebastián.
July 2023: I received a student travel award to attend CogSci 2023 in Sydney, where I will present one talk.
July 2023: Two full papers accepted for DCAI 2023.
May-July 2023: I will visit Shravan Vasishth's lab at the University of Potsdam, supported by an SFB fellowship.
Hi and thanks for stopping by! I am a second-year PhD student in the Department of Linguistics at the University of Southern California. I received my BA in Linguistics from the University of Amsterdam in 2020. My CV can be accessed here.
Using tools from psycholinguistic experimentation, computational modeling, and corpus analysis, I study language processing and use in humans and machines (i.e., neural networks). Some topics I've been working on include:
Similarity-based interference
Local coherence
Locality vs. anti-locality effects
Computation of non-literal interpretations
Individual differences in predictive processing and pragmatic reasoning
Evaluations of formal and functional linguistic competence in language models
I am mainly advised by Elsi Kaiser at USC Linguistics, but I also actively collaborate with people both within and outside the department, including (in alphabetical order):
Zuzanna Fuchs (USC)
Michael Hahn (Saarland)
Muxuan He (USC)
Shravan Vasishth (Potsdam)
Himanshu Yadav (Potsdam)
Yang Yang (GDUFS)
Before starting graduate school, I worked extensively on prosody and its interface with syntax, semantics, and processing. I have one manuscript in preparation with Lisa Cheng and Leticia Pablos Robles at Leiden University on the role of prosody in anticipating clause types in Mandarin Chinese. My undergraduate thesis, supervised by Jeannette Schaeffer and Marijn van 't Veer, investigates the prosody of Contrastive Topic and Contrastive Focus in Mandarin Chinese using a production task.
Peer-reviewed papers

Hao, H., Hahn, M., & Kaiser, E. (2023). How Do Syntactic Statistics and Semantic Plausibility Modulate Local Coherence Effects. In Proceedings of the 45th Annual Meeting of the Cognitive Science Society. [PDF]

Hao, H. (2023). Evaluating Transformers' Sensitivity to Syntactic Embedding Depth. In: Mehmood, R., et al. Distributed Computing and Artificial Intelligence, Special Sessions I, 20th International Conference. DCAI 2023. Lecture Notes in Networks and Systems, vol 741. Springer, Cham. [PDF]

Hao, H., & He, M. (2023). Can Large Language Model Surprisal Capture the Informativity Bias in Human Language Processing? In: Mehmood, R., et al. Distributed Computing and Artificial Intelligence, Special Sessions I, 20th International Conference. DCAI 2023. Lecture Notes in Networks and Systems, vol 741. Springer, Cham. [PDF]

Hao, H., Schaeffer, J., & van 't Veer, M. (2022). On the prosody of Contrastive Topic and Contrastive Focus in Mandarin Chinese. Proceedings of the 40th West Coast Conference on Formal Linguistics (WCCFL). Somerville, MA: Cascadilla Proceedings Project. [PDF]

Conference presentations

upcoming. Hao, H., Fuchs, Z., & Vasishth, S. Similarity-Based Interferences in Chinese Classifier-Noun Dependencies. Talk at the 6th Crosslinguistic Perspectives on Processing and Learning Workshop (X-PPL 2023), Zurich, Switzerland.

upcoming. Hao, H., Fuchs, Z., & Vasishth, S. Similarity-Based Interferences in Chinese Classifier-Noun Dependencies. Talk at the 2nd Architectures and Mechanisms of Language Processing - Asia (AMLaP-Asia), Hong Kong.

2023. Hao, H., & He, M. Can large language model surprisal capture the informativity bias in human language processing? Poster at the 28th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2023), San Sebastián, Spain.

2023. Hao, H., Hahn, M., & Kaiser, E. How Do Syntactic Statistics and Semantic Plausibility Modulate Local Coherence Effects. Talk at the 45th Annual Meeting of the Cognitive Science Society (CogSci 2023), Sydney, Australia.

2023. Hao, H. Evaluating Transformers' Sensitivity to Syntactic Embedding Depth. Talk at the 20th International Conference on Distributed Computing and Artificial Intelligence (DCAI 2023), Guimarães, Portugal.

2023. Hao, H., & He, M. Can large language model surprisal capture the informativity bias in human language processing? Talk at the 20th International Conference on Distributed Computing and Artificial Intelligence (DCAI 2023), Guimarães, Portugal.

2023. Hao, H. Evaluating Transformers' sensitivity to syntactic embedding depth. Poster at the 36th Annual Conference on Human Sentence Processing (HSP 2023), Pittsburgh, PA.

2023. Hao, H., Kaiser, E., & Hahn, M. Effects of plausibility on agreement attraction. Poster at the 36th Annual Conference on Human Sentence Processing (HSP 2023), Pittsburgh, PA.
Grace Ford Salvatori 301, 3601 Watt Way, Los Angeles, CA 90089-1693
hailinhao061 [at] gmail [dot] com
hailinha [at] usc [dot] edu