Zhixue Zhao, Assistant Professor

Computer Science · University of Sheffield, United Kingdom — Assistant Professor

Research Areas

  • Cultural Analysis
  • Journalism
  • Media, Information and Communication Technology
  • Political Communication
  • Science Communication
  • Visual Communication

Highlighted Publications

Jake Vasilakes, Zhixue Zhao, Michal Gregor, Ivan Vykopal, Martin Hyben, and Carolina Scarton. 2024. ExU: AI Models for Examining Multilingual Disinformation Narratives and Understanding their Spread. In Proceedings of the 25th Annual Conference of the European Association for Machine Translation (Volume 2), pages 39–40, Sheffield, UK. European Association for Machine Translation (EAMT).

Zhixue Zhao and Nikolaos Aletras. 2024. Comparing Explanation Faithfulness between Multilingual and Monolingual Fine-tuned Language Models. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 3226–3244, Mexico City, Mexico. Association for Computational Linguistics.

Zhixue Zhao, George Chrysostomou, Kalina Bontcheva, and Nikolaos Aletras. 2022. On the Impact of Temporal Concept Drift on Model Explanations. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 4039–4054, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.

Zhixue Zhao and Nikolaos Aletras. 2023. Incorporating Attribution Importance for Improving Faithfulness Metrics. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4732–4745, Toronto, Canada. Association for Computational Linguistics.

About

I am Cass Zhixue Zhao, a Lecturer in Natural Language Processing in the Department of Computer Science at the University of Sheffield. My long-term research goal is to enable trustworthy, responsible, and efficient NLP models. My current interests span interpretability and large language models (LLMs). My recent research projects focus on LLM bias, interpretability, and multimodality.