Refined Gate: A Simple and Effective Gating Mechanism for Recurrent Units


Abstract

Recurrent neural networks (RNNs) have been widely studied for sequence learning tasks, and the mainstream models (e.g., LSTM and GRU) rely on the gating mechanism to control how information flows between hidden states. However, the vanilla gates in RNNs (e.g., the input gate in LSTM) suffer from gate undertraining, which can be caused by various factors such as saturating activation functions, the gate layout (e.g., the number of gates and the gating functions), or even a suboptimal memory state. Undertrained gates may fail to learn their switching roles and thus weaken performance. In this paper, we propose a new gating mechanism for general gated recurrent neural networks to handle this issue. Specifically, the proposed gates, denoted as refined gates, directly short-connect the extracted input features to the outputs of the vanilla gates. The refining mechanism enhances gradient back-propagation and extends the gating activation scope, which can guide the RNN toward possibly deeper minima. We verify the proposed gating mechanism on three popular types of gated RNNs: LSTM, GRU and MGU. Extensive experiments on 3 synthetic tasks, 3 language modeling tasks and 5 scene text recognition benchmarks demonstrate the effectiveness of our method. [Paper]
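
To make the idea concrete, below is a minimal PyTorch sketch of an LSTM cell with refined gates. The particular refinement function used here (adding a tanh of the gate's own pre-activation to the vanilla sigmoid output) is an illustrative assumption that only captures the described short connection; the class and method names (RefinedGateLSTMCell, refine) are ours, and the exact formulation is given in the paper.

import torch
import torch.nn as nn


class RefinedGateLSTMCell(nn.Module):
    """LSTM cell whose gates are refined by a short connection from their own
    pre-activation features (an illustrative reading of the refined gate)."""

    def __init__(self, input_size, hidden_size):
        super().__init__()
        # A single linear map produces the pre-activations of all four branches.
        self.linear = nn.Linear(input_size + hidden_size, 4 * hidden_size)

    @staticmethod
    def refine(z):
        # Vanilla gate: sigmoid of the extracted feature, confined to (0, 1).
        g = torch.sigmoid(z)
        # Short connection (illustrative): add a bounded copy of the feature
        # itself, so the effective gate value can leave the (0, 1) band and the
        # gradient gets a second path that bypasses the saturating sigmoid.
        return g + torch.tanh(z)

    def forward(self, x, state):
        h, c = state
        z = self.linear(torch.cat([x, h], dim=-1))
        z_i, z_f, z_o, z_g = z.chunk(4, dim=-1)

        i = self.refine(z_i)          # refined input gate
        f = self.refine(z_f)          # refined forget gate
        o = self.refine(z_o)          # refined output gate
        g = torch.tanh(z_g)           # candidate cell update (not a gate)

        c_new = f * c + i * g
        h_new = o * torch.tanh(c_new)
        return h_new, (h_new, c_new)


# One step over a toy batch.
cell = RefinedGateLSTMCell(input_size=32, hidden_size=64)
x = torch.randn(8, 32)
state = (torch.zeros(8, 64), torch.zeros(8, 64))
out, state = cell(x, state)
print(out.shape)   # torch.Size([8, 64])

The same refine step can wrap the gates of GRU or MGU cells, since it only modifies how each gate value is produced from its pre-activation.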

Highlighted Contributions

❃ We provide a deeper understanding of the gating mechanism in GRNNs and focus on a widely existing and challenging problem: gate undertraining.

❃ We propose a new gating mechanism that enhances the vanilla gates with simple yet effective refining operations, and verify that it adapts well to existing GRNN units such as LSTM, GRU and MGU.

❃ We present an intuitive evaluation of gate control ability through well-designed sequential tasks, i.e., adding and counting (see the data-generation sketch after this list), and offer reasonable illustrations both qualitatively and quantitatively.

❃ Experiments on various tasks, including 3 synthetic datasets and multiple real-world datasets (3 language modeling tasks and 5 scene text recognition benchmarks), demonstrate that the proposed gate refinement mechanism effectively boosts GRNN learning.
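
As a concrete reference for the adding task mentioned above, here is a minimal NumPy data-generation sketch. The task structure (two marked positions whose values must be summed) is the standard adding problem; the specific sequence length, batch size and helper name make_adding_batch are illustrative assumptions and need not match the paper's experimental settings.

import numpy as np


def make_adding_batch(batch_size=32, seq_len=100, rng=None):
    """Each sample is a (seq_len, 2) sequence: channel 0 holds random values
    in [0, 1), channel 1 marks exactly two positions. The regression target
    is the sum of the two marked values."""
    rng = np.random.default_rng() if rng is None else rng
    values = rng.random((batch_size, seq_len))
    markers = np.zeros((batch_size, seq_len))
    targets = np.zeros(batch_size)
    for b in range(batch_size):
        # Pick one marker in each half of the sequence so the dependency
        # between the two marked positions spans roughly seq_len / 2 steps.
        i = rng.integers(0, seq_len // 2)
        j = rng.integers(seq_len // 2, seq_len)
        markers[b, i] = markers[b, j] = 1.0
        targets[b] = values[b, i] + values[b, j]
    inputs = np.stack([values, markers], axis=-1)   # (batch, seq_len, 2)
    return inputs.astype(np.float32), targets.astype(np.float32)


# Usage: always predicting the target mean of 1.0 gives MSE ~0.167,
# which is the trivial baseline a recurrent model has to beat.
x, y = make_adding_batch(batch_size=4, seq_len=50)
print(x.shape, y.shape)   # (4, 50, 2) (4,)
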


Recommended Citations

If you find our work helpful to your research, please feel free to cite us:
@article{cheng2020rg,
    title={Refined Gate: A Simple and Effective Gating Mechanism for Recurrent Units},
    author={Cheng, Zhanzhan and Xu, Yunlu and Cheng, Mingjian and Qiao, Yu and Pu, Shiliang and Niu, Yi and Wu, Fei},
    journal={arXiv preprint arXiv:2002.11338},
    year={2020}
}