Future Word Context in Neural Network Language Models
----------------------------------------------------------------------------------------------------------------------------
Department of Systems Engineering and Engineering Management
The Chinese University of Hong Kong
----------------------------------------------------------------------------------------------------------------------------
Date: Friday, December 15, 2017, 4:30pm - 5:30pm
Title: Future Word Context in Neural Network Language Models
Speaker: Dr. Chen Xie
Abstract: Language models are a crucial component in a range of applications, including speech recognition. Traditional language models (LMs) compute the probability of the current word based only on its history words. In this unidirectional form (uni-LM), future words are ignored, even though they contain useful information for word prediction. However, using future information in language modeling raises several theoretical and practical issues. In this talk, I will introduce our recent work on incorporating future word contexts in neural network language models (NNLMs). First, bidirectional recurrent neural network language models (bi-RNNLMs) are constructed and applied to speech recognition. However, bi-RNNLMs have drawbacks: they require the complete sentence to compute probabilities, which makes them slow to train and difficult to use for lattice rescoring. A novel network structure, called the su-RNNLM, is proposed to address these issues. Instead of using a recurrent unit to capture the complete future word context, as in bi-RNNLMs, a feedforward unit models a fixed, finite number of succeeding words. The proposed model can be trained much more efficiently than bi-RNNLMs and is suitable for lattice rescoring. Experimental results show that significant and consistent performance improvements can be obtained by incorporating future word contexts in language modeling. Furthermore, consistent improvements are obtained from confusion network decoding on lattices generated with su-RNNLMs.
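For readers curious about the su-RNNLM idea before the talk, the sketch below illustrates it in PyTorch: a recurrent unit summarizes the history words, a feedforward unit embeds a fixed window of k succeeding words, and the two representations are combined to predict the current word. This is a minimal illustration under stated assumptions, not the speaker's implementation; the layer sizes, the choice of a GRU, and all names (SuRNNLM, future_window, etc.) are illustrative.

    # Minimal sketch of an su-RNNLM-style model: P(w_t | w_1..w_{t-1}, w_{t+1}..w_{t+k}).
    # Not the speaker's code; all dimensions and names are assumptions.
    import torch
    import torch.nn as nn

    class SuRNNLM(nn.Module):
        def __init__(self, vocab_size, emb_dim=256, hidden_dim=512, future_window=3):
            super().__init__()
            self.k = future_window                 # fixed number of succeeding words
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True)  # past context
            # Feedforward unit over the concatenated embeddings of the next k words,
            # replacing the backward recurrence of a bi-RNNLM.
            self.future_ff = nn.Linear(self.k * emb_dim, hidden_dim)
            self.out = nn.Linear(2 * hidden_dim, vocab_size)

        def forward(self, history, future):
            # history: (batch, t) ids of w_1..w_{t-1}; future: (batch, k) ids of w_{t+1}..w_{t+k}
            h, _ = self.rnn(self.embed(history))   # (batch, t, hidden_dim)
            h_past = h[:, -1, :]                   # state after the last history word
            h_future = torch.tanh(self.future_ff(self.embed(future).flatten(1)))
            return self.out(torch.cat([h_past, h_future], dim=-1))  # logits over w_t

    model = SuRNNLM(vocab_size=10000)
    logits = model(torch.randint(0, 10000, (8, 12)),   # 12 history words per sentence
                   torch.randint(0, 10000, (8, 3)))    # 3 succeeding words
    # logits.shape == (8, 10000)

Because the future context is a fixed window rather than the whole remaining sentence, probabilities can be computed without seeing the sentence end, which is what makes the model amenable to lattice rescoring.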
Biography: Xie (Jeff) Chen is a Research Associate in the speech group of the Machine Intelligence Laboratory at Cambridge. He obtained his PhD degree in the same group in April 2017. Before joining the Cambridge University Engineering Department in 2012, Jeff obtained his Bachelor's degree at Xiamen University (in Electronic Engineering) in 2009 and finished his M.Phil. study at Tsinghua University (in Electronic Engineering) in 2012.
Everyone is welcome to attend the talk!
Venue: Room 513, William M.W. Mong Engineering Building (ERB)
(Engineering Building Complex Phase 2), The Chinese University of
Hong Kong.
The talk will be hosted by:
Prof. LIU Xunying,
Department of Systems Engineering and Engineering Management,
The Chinese University of Hong Kong,
E-mail: xyliu@se.cuhk.edu.hk
Homepage: http://www1.se.cuhk.edu.hk/~xyliu/
SEEM-5201 Website: http://seminar.se.cuhk.edu.hk
Email: seem5201@se.cuhk.edu.hk