Improving Recurrent Neural Network Language Models For Human Language Big Data
----------------------------------------------------------------------------------------------------------------------------
Department of Systems Engineering and Engineering Management
The Chinese University of Hong Kong
----------------------------------------------------------------------------------------------------------------------------
Date: Thursday, 11:00am - 12:30pm, September 24, 2015
Title: Improving Recurrent Neural Network Language Models For Human Language Big Data
Speaker: Dr. LIU Xunying, Cambridge University
Abstract:
Human languages present natural forms of big data. They are highly complex and dynamic in nature, variable in form, and produced in massive volume every day, with vital economic and social value. Recurrent neural network language models (RNNLMs) and their variants have recently become increasingly popular for a range of human language technology applications, including speech recognition, machine translation, spoken language understanding and dialogue systems, owing to their inherently strong sequence modelling ability to capture long-distance contextual dependencies. However, several major challenges remain when modelling human language big data using RNNLMs. First, the computational cost incurred in training and evaluation significantly limits their scalability and the range of possible application areas. Second, the lack of efficient decoding techniques with a compact search-space representation presents severe difficulty when using RNNLMs in practical systems. Finally, the lack of a structured representation of the factors influencing speech and language further limits their ability to generalize more widely and to adapt dynamically to factors that evolve over time. To address these issues, this talk presents a series of recent research efforts aiming to significantly improve the training efficiency, decoding efficiency and basic modelling strategy of conventional RNNLMs for human language big data. A noise contrastive estimation (NCE) based RNNLM training criterion, combined with an efficient GPU-based bunch-mode RNNLM training algorithm, achieved over a 50-fold speed-up in training and evaluation over the publicly available RNNLM toolkit. Two history clustering schemes provided highly accurate and efficient RNNLM decoding approaches, producing decoding networks over 70% more compact than tree-structured 10k-best lists with comparable performance.
A novel RNNLM training criterion and modelling architecture were further used to automatically generate and model paraphrases and topic variation, increasing the performance improvements of RNNLMs over baseline n-gram LMs by a factor of 2.5. Experimental results are presented for multiple large-scale speech recognition tasks.
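The NCE criterion mentioned in the abstract replaces the expensive softmax normalization over the full output vocabulary with a binary classification between the observed next word and k words sampled from a noise distribution q. As a rough illustration only (not the speaker's implementation), a single prediction step of such a loss might be sketched as follows, with hypothetical scores, a uniform noise distribution, and an invented `nce_loss` helper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def nce_loss(target_score, noise_scores, target_noise_logprob,
             noise_logprobs, k):
    """NCE loss for one prediction step (illustrative sketch).

    target_score        : model score (unnormalized logit) of the observed word
    noise_scores        : model scores of the k words sampled from q
    target_noise_logprob: log q(target word)
    noise_logprobs      : log q(.) for each of the k noise samples
    """
    # Probability the word came from the data rather than the noise
    # distribution: sigmoid(score - log(k * q(word)))
    p_data = sigmoid(target_score - (np.log(k) + target_noise_logprob))
    p_noise = sigmoid(noise_scores - (np.log(k) + noise_logprobs))
    # Classify the target as "data" and each noise sample as "noise"
    return -np.log(p_data) - np.sum(np.log(1.0 - p_noise))

# Hypothetical example: uniform noise over a 10k-word vocabulary,
# a well-scored target and poorly-scored noise samples.
k = 10
log_q = np.log(1.0 / 10000)
loss = nce_loss(10.0, np.full(k, -10.0), log_q, np.full(k, log_q), k)
```

Because the loss never sums over the full vocabulary, each training step costs O(k) rather than O(|V|) output evaluations; a further known benefit of NCE-trained language models is that the resulting outputs tend to be approximately self-normalized, so unnormalized scores can be used directly at evaluation time.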
Biography:
Dr. Xunying Liu received a Ph.D. degree in speech recognition in 2006 and an MPhil degree in computer speech and language processing in 2001, both from the University of Cambridge, following a bachelor's degree from Shanghai Jiao Tong University in 2000. He is currently a Senior Research Associate at the Machine Intelligence Laboratory of the Cambridge University Engineering Department, and also holds a visiting Research Professorship at the Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences. He has published over 50 peer-reviewed papers in leading international journals and conferences, as well as two book chapters, covering a wide range of research topics in large-scale speech and language processing systems, including speech recognition, language modelling, machine translation, natural language processing, speech synthesis, speech production and language learning. He received the best paper award at ISCA Interspeech 2010 for his paper "Language Model Cross Adaptation For LVCSR System Combination", and his research on efficient decoding techniques using recurrent neural network language models, published at IEEE ICASSP 2014, was also nominated for an award. He is a co-author of the HTK toolkit and continues to contribute to its development. His research outputs led to several large-scale speech recognition systems that were top ranked in a series of international research evaluations, including the Cambridge Mandarin Chinese broadcast recognition system developed for the 2011 US DARPA-sponsored GALE speech translation evaluation and the Cambridge 2015 multi-genre broadcast speech transcription system. Dr. Liu is a member of IEEE and ISCA.
Everyone is welcome to attend the talk!
Venue: Room 513,
William M.W. Mong Engineering Building (ERB),
(Engineering Building Complex Phase 2)
The Chinese University of Hong Kong.
The talk will be hosted by:
Prof. Helen Meng,
Department of Systems Engineering and Engineering Management,
The Chinese University of Hong Kong,
E-mail: hmmeng@se.cuhk.edu.hk
Homepage: http://www.se.cuhk.edu.hk/hmmeng_web/
SEEM-5201 Website: http://seminar.se.cuhk.edu.hk
Email: seem5201@se.cuhk.edu.hk