Chunxi Liu

I have been a Ph.D. student in the Department of Electrical and Computer Engineering since Fall 2012. My advisor is Prof. Sanjeev Khudanpur, and I also work closely with Aren Jansen. My research interests are spoken keyword search and automatic speech recognition (ASR).

I was an undergraduate in the School of Electronic, Electrical and Computer Engineering at the University of Birmingham, UK from 2010 to 2012. My final year project supervisor was Prof. Martin Russell. Before that, I was an undergraduate in Electronic Information Engineering at the Harbin Institute of Technology from 2008 to 2010.

You can reach me at chunhsiliu at gmail dot com.

Publications:

  • Chunxi Liu*, Preethi Jyothi*, Hao Tang, Vimal Manohar, Rose Sloan, Tyler Kekona, Mark Hasegawa-Johnson, and Sanjeev Khudanpur, “Adapting ASR for Under-Resourced Languages Using Mismatched Transcriptions,” submitted to Proceedings of ICASSP, 2016. (* denotes joint first authorship)
  • Chunxi Liu, Aren Jansen, and Sanjeev Khudanpur, “Context-Dependent Point Process Models for Keyword Search and Detection-Based ASR,” submitted to Proceedings of ICASSP, 2016.
  • Chunxi Liu, Puyang Xu, and Ruhi Sarikaya, “Deep Contextual Language Understanding in Spoken Dialogue Systems,” in Proceedings of Interspeech, 2015.
  • Jan Trmal, Guoguo Chen, Dan Povey, Sanjeev Khudanpur, Pegah Ghahremani, Xiaohui Zhang, Vimal Manohar, Chunxi Liu, Aren Jansen, Dietrich Klakow, David Yarowsky, and Florian Metze, “A Keyword Search System Using Open Source Software,” in IEEE Spoken Language Technology Workshop (SLT), 2014.
  • Chunxi Liu, Aren Jansen, Guoguo Chen, Keith Kintzley, Jan Trmal, and Sanjeev Khudanpur, “Low-resource Open Vocabulary Keyword Search Using Point Process Models,” in Proceedings of Interspeech, 2014.


Research Experience:


  • Visiting student at the University of Washington, WA USA, Summer 2015.

Joined the 2015 Jelinek Summer Workshop on Speech and Language Technology, working in the group “Probabilistic Transcription of Languages with No Native-Language Transcribers,” led by Prof. Mark Hasegawa-Johnson. Developed a Kaldi-based recipe for Maximum a Posteriori (MAP) adaptation of multilingual GMM-HMM acoustic models using probabilistic transcriptions.

  • Worked in the Language Understanding and Dialog Systems Group with Puyang Xu and Ruhi Sarikaya. Developed a unified modeling framework for multi-turn, multi-task spoken language understanding using recurrent convolutional neural networks.

  • Working on spoken keyword search and speech recognition, advised by Aren Jansen and Sanjeev Khudanpur.


Last updated: September 30, 2015

Center for Language and Speech Processing