OVERVIEW OF THE DIFFERENT TEXT SUMMARIZATION METHODS
Abstract
Text summarization is a major problem because it is used across a wide range of fields, so an improved mechanism for fast and effective extraction of information is essential. Extracting a summary from all of the available text data by hand is very difficult. To outline ways of approaching text summarization, this paper presents a brief survey of text summarization methods such as MatchSum (Zhong et al., 2020), BertSumExt (Liu and Lapata, 2019), and SemSim (Yoon et al., 2020), which have shown leading results in extractive and abstractive text summarization. The paper reviews these models, discusses their advantages and disadvantages, and suggests how text summarization can be improved.
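As an illustration of the abstractive approach discussed in the survey, the sketch below runs a pretrained BART model (Lewis et al., 2019) through the Hugging Face transformers pipeline. The checkpoint name "facebook/bart-large-cnn" and the length parameters are illustrative assumptions, not the exact configurations evaluated in the surveyed papers.

```python
# Minimal sketch of abstractive summarization with a pretrained BART model.
# Assumes the Hugging Face `transformers` package is installed; the checkpoint
# "facebook/bart-large-cnn" is an illustrative choice, not necessarily the
# setup used by the surveyed methods.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

article = (
    "Text summarization condenses a long document into a short passage that "
    "preserves its key information. Extractive methods select salient sentences "
    "from the source text, while abstractive methods generate new sentences."
)

# max_length / min_length bound the length of the generated summary in tokens.
summary = summarizer(article, max_length=60, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```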
About the Authors
D. Dauit
Kazakhstan
M. Kemalov
Kazakhstan
A. Jaksylykova
Kazakhstan
References
1. Lewis M. et al. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension // arXiv preprint arXiv:1910.13461. – 2019.
2. Hermann K. M. et al. Teaching machines to read and comprehend // arXiv preprint. – 2015.
3. Zhong M. et al. Extractive summarization as text matching // arXiv preprint arXiv:2004.08795. – 2020.
4. Liu Y., Lapata M. Text summarization with pretrained encoders // arXiv preprint arXiv:1908.08345. – 2019.
5. Yoon W. et al. Learning by semantic similarity makes abstractive summarization better // arXiv preprint arXiv:2002.07767. – 2020.
6. NLP-progress, summarization. URL: http://nlpprogress.com/english/summarization.html
For citations:
Dauit D., Kemalov M., Jaksylykova A. OVERVIEW OF THE DIFFERENT TEXT SUMMARIZATION METHODS. Herald of the Kazakh-British Technical University. 2020;17(2):163-168.