Comprehensive Review on Effectual Information Retrieval of Semantic Drift using Deep Neural Network
Authors: A. Uma Maheswari and N. Revathy. Volume 8, No. 1, January-March 2019, pp. 32-35
Abstract
Semantic drift is a common problem in iterative information extraction. Unsupervised bagging combined with distributional similarity has been used to reduce semantic drift in iterative bootstrapping algorithms, particularly when extracting large semantic lexicons. This work proposes a method that minimizes semantic drift by identifying Drifting Points (DPs) and removing the errors they introduce. Previous methods for identifying drifting errors can be roughly divided into two categories, (1) multi-class based and (2) single-class based, according to the settings of the Information Extraction systems that adopt them. Compared with previous approaches, which usually incur a substantial loss in recall, the DP-based cleaning method can remove a large proportion of semantic drift errors while maintaining high recall.
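The DP-based cleaning idea described in the abstract can be sketched as a small bootstrapping loop: a candidate whose distributional similarity to the seed set falls below a threshold is flagged as a Drifting Point, and any later extraction introduced through a flagged DP is discarded. This is a minimal illustrative sketch; the function names, the precomputed similarity table, and the threshold are assumptions, not the authors' implementation.

```python
def seed_similarity(term, seeds, sim_table):
    """Mean distributional similarity of a term to the seed set.
    sim_table maps (term, seed) pairs to precomputed scores."""
    return sum(sim_table.get((term, s), 0.0) for s in seeds) / len(seeds)

def bootstrap_with_dp_cleaning(seeds, rounds, sim_table, dp_threshold=0.3):
    """Iterative bootstrapping that flags Drifting Points (DPs)
    and drops extractions reached through them."""
    lexicon = set(seeds)
    drifting_points = set()
    for candidates in rounds:              # one list of (candidate, parent) per iteration
        for cand, parent in candidates:
            if parent in drifting_points:  # remove the effect introduced by a DP
                continue
            if seed_similarity(cand, seeds, sim_table) < dp_threshold:
                drifting_points.add(cand)  # flag as a DP instead of accepting it
            else:
                lexicon.add(cand)
    return lexicon, drifting_points

# Toy run: "turkey" drifts away from the city seeds, so both it and its
# downstream extraction "ankara" are kept out of the lexicon.
sim = {("berlin", "paris"): 0.8, ("berlin", "london"): 0.7,
       ("rome", "paris"): 0.9, ("rome", "london"): 0.8,
       ("turkey", "paris"): 0.2, ("turkey", "london"): 0.2}
lexicon, dps = bootstrap_with_dp_cleaning(
    ["paris", "london"],
    [[("berlin", "paris"), ("turkey", "london")],
     [("rome", "berlin"), ("ankara", "turkey")]],
    sim)
print(sorted(lexicon))  # ['berlin', 'london', 'paris', 'rome']
print(dps)              # {'turkey'}
```

The key design point is that a DP is not merely rejected: it is remembered, so that every extraction pattern or instance it would later introduce is also blocked, which is what preserves recall while cleaning drift errors.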
Keywords
Semantic Drift, Drifting Points, Deep Neural Network, Information Retrieval
References
[1] [Online] Available: www.wikipedia.com.
[2] Maaz Amajd, Zhanibek Kaimuldenov and Ilia Voronkov, “Text Classification with Deep Neural Networks”.
[3] Yunfeng Zhu, Fernando De la Torre, and Jeffrey F. Cohn, “Dynamic Cascades with Bidirectional Bootstrapping for Action Unit Detection in Spontaneous Facial Behavior”, IEEE, April-June 2011.
[4] Somayeh Kazemi, Ayaz Ghorbani, Hamidreza Amindavar, and Dennis R. Morgan, “Vital-Sign Extraction Using Bootstrap-Based Generalized Warblet Transform in Heart and Respiration Monitoring Radar System”, IEEE, Feb. 2016.
[5] Wing W. Y. Ng, Xing Tian, Yueming Lv, and Daniel S. Yeung, “Incremental Hashing for Semantic Image Retrieval in Non-stationary Environments”, IEEE, Nov. 2017.
[6] Wentao Wu, Hongsong Li, Haixun Wang, and Kenny Q. Zhu, “Semantic Bootstrapping: A Theoretical Perspective”, IEEE, Feb. 2017.
[7] Cheng-Tao Chung, Cheng-Yu Tsai, Chia-Hsiang Liu, and Lin-Shan Lee, “Unsupervised Iterative Deep Learning of Speech Features and Acoustic Tokens with Applications to Spoken Term Detection”, IEEE, Oct. 2017.
[8] Peipei Li, Lu He, Haiyan Wang, Xuegang Hu, Yuhong Zhang, Lei Li, and Xindong Wu, “Learning from Short Text Streams with Topic Drifts”, IEEE, September 2018.
[9] Zhixu Li, Ying He, Binbin Gu, An Liu, Hongsong Li, Haixun Wang, and Xiaofang Zhou, “Diagnosing and Minimizing Semantic Drift in Iterative Bootstrapping Extraction”, IEEE, May 2018.
[10] Weichao Shen, Yuwei Wu, Junsong Yuan, Lingyu Duan, Jian Zhang, and Yunde Jia, “Robust Distracter-Resistive Tracker via Learning a Multi-Component Discriminative Dictionary”, IEEE, 2018.
[11] Muhammad Zain Amin, Noman Nadeem, “Convolutional Neural Network: Text Classification Model for Open Domain Question Answering System”.
[12] Yu He, Jianxin Li, Yangqiu Song, Mutian He, and Hao Peng, “Time-evolving Text Classification with Deep Neural Networks”, IJCAI, 2018.
[13] C. Du and L. Huang, “Text Classification Research with Attention-based Recurrent Neural Networks”, International Journal of Computers Communications & Control, February 2018.
[14] Siwei Lai, Liheng Xu, Kang Liu, and Jun Zhao, “Recurrent Convolutional Neural Networks for Text Classification”, Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015.
[15] Alexis Conneau, Holger Schwenk, and Yann Le Cun, “Very Deep Convolutional Networks for Text Classification”, Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, Vol. 1, Long Papers, pp. 1107-1116, Valencia, Spain, April 3-7, 2017.
[16] Baoxin Wang, “Disconnected Recurrent Neural Networks for Text Categorization”, Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pp. 2311-2320, Melbourne, Australia, July 15-20, 2018.
[17] Maaz Amajd, Zhanibek Kaimuldenov, and Ilia Voronkov, “Text Classification with Deep Neural Networks”, ceur-ws.org/Vol-1989, 2018.