Survey on Text Summarization Framework Using Machine Learning

Authors

  • Mrs. Kirti D Kulkarni  Department of Computer Engineering, Zeal College of Engineering and Research Pune, Maharashtra, India
  • Prof. Jareena N. Shaikh  Department of Computer Engineering, Zeal College of Engineering and Research Pune, Maharashtra, India

Keywords

word vectors, word analogies, fastText, integer linear programming, text summarization, natural language processing

Abstract

Automatic text summarization, an important natural language processing application, aims to condense a given text into a shorter version using machine learning techniques. As the volume of media content transmitted over the Internet grows exponentially, neural-network-based summarization of text drawn from asynchronous combinations of sources is becoming increasingly necessary. Applying the principles of natural language processing (NLP), this research proposes a framework for examining the rich information contained in multi-modal data and for improving existing text summarization features. The underlying principle is to bridge the semantic gaps between different types of content. Next, a summary of the relevant information is generated using multi-modal topic modelling. Finally, all multi-modal aspects are taken into account to produce a textual summary that maximizes the relevance, non-redundancy, believability, and coverage of the information by maximizing a collection of submodular functions.
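
To make the selection step concrete, the sketch below illustrates one common way such a submodular objective is optimized: greedily adding the sentence whose marginal gain in coverage (relevance) minus overlap with already-selected sentences (non-redundancy) is largest. This is a minimal Python sketch assuming a TF-IDF cosine-similarity measure; the `summarize` function, its objective, and its weights are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: greedy submodular sentence selection for extractive
# summarization. The objective, weights, and names here are illustrative
# assumptions, not the paper's actual method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def summarize(sentences, budget=3, redundancy_weight=0.5):
    """Greedily pick up to `budget` sentences, maximizing a
    coverage-minus-redundancy objective (a standard surrogate for
    relevance and non-redundancy in summarization)."""
    tfidf = TfidfVectorizer().fit_transform(sentences)
    sim = cosine_similarity(tfidf)  # pairwise sentence similarities

    def marginal_gain(candidate, selected):
        # Coverage: how well the candidate represents the whole document.
        coverage = sim[candidate].sum()
        # Redundancy: overlap with sentences already in the summary.
        redundancy = sum(sim[candidate][j] for j in selected)
        return coverage - redundancy_weight * redundancy

    selected = []
    while len(selected) < min(budget, len(sentences)):
        remaining = [i for i in range(len(sentences)) if i not in selected]
        best = max(remaining, key=lambda i: marginal_gain(i, selected))
        selected.append(best)
    return [sentences[i] for i in sorted(selected)]  # keep original order

if __name__ == "__main__":
    doc = [
        "Automatic summarization condenses a document into a shorter version.",
        "Neural networks are widely used for abstractive summarization.",
        "Summarization condenses text into a short version automatically.",
        "Submodular selection balances relevance against redundancy.",
    ]
    print(summarize(doc, budget=2))
```

Greedy selection is the usual optimizer for such objectives because, for monotone submodular functions under a cardinality budget, it carries the classical (1 - 1/e) approximation guarantee.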

References

  1. M. Jang and P. Kang, "Learning-free unsupervised extractive summarization model," IEEE Access, vol. 9, 2021. [Online]. Available: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9321308
  2. A. See, P. J. Liu, and C. D. Manning, "Get to the point: Summarization with pointer-generator networks," 2017, arXiv:1704.04368. [Online]. Available: http://arxiv.org/abs/1704.04368
  3. Q. Zhou, N. Yang, F. Wei, S. Huang, M. Zhou, and T. Zhao, "Neural document summarization by jointly learning to score and select sentences," 2018, arXiv:1807.02305. [Online]. Available: http://arxiv.org/abs/1807.02305
  4. X. Zhang, M. Lapata, F. Wei, and M. Zhou, "Neural latent extractive document summarization," 2018, arXiv:1808.07187. [Online]. Available: http://arxiv.org/abs/1808.07187
  5. C. Kedzie, K. McKeown, and H. Daume, "Content selection in deep learning models of summarization," 2018, arXiv:1810.12343. [Online]. Available: http://arxiv.org/abs/1810.12343
  6. L. Liu, Y. Lu, M. Yang, Q. Qu, J. Zhu, and H. Li, "Generative adversarial network for abstractive text summarization," in Proc. 32nd AAAI Conf. Artif. Intell., 2018, pp. 1–2.
  7. J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding," 2018, arXiv:1810.04805. [Online]. Available: http://arxiv.org/abs/1810.04805
  8. T. Boongoen and N. Iam-On, "Cluster ensembles: A survey of approaches with recent extensions and applications," Comput. Sci. Rev., vol. 28, pp. 1–25, May 2018.
  9. M. Koupaee and W. Y. Wang, "WikiHow: A large scale text summarization dataset," 2018, arXiv:1810.09305. [Online]. Available: http://arxiv.org/abs/1810.09305
  10. E. Grave, P. Bojanowski, P. Gupta, A. Joulin, and T. Mikolov, "Learning word vectors for 157 languages," in Proc. Int. Conf. Lang. Resour. Eval. (LREC), 2018, pp. 1–3.
  11. Z. Yang, Z. Dai, Y. Yang, J. Carbonell, R. R. Salakhutdinov, and Q. V. Le, "XLNet: Generalized autoregressive pretraining for language understanding," in Proc. Adv. Neural Inf. Process. Syst., 2019, pp. 5753–5763.
  12. Z. Li, Z. Peng, S. Tang, C. Zhang, and H. Ma, "Text summarization method based on double attention pointer network," IEEE Access, vol. 8, pp. 11279–11288, 2020.
  13. J. Tan, X. Wan, and J. Xiao, "Abstractive document summarization with a graph-based attentional neural model," in Proc. 55th Annu. Meeting Assoc. Comput. Linguistics, vol. 1, Jul. 2017, pp. 1171–1181.
  14. H. Kim and S. Lee, "A context based coverage model for abstractive document summarization," in Proc. Int. Conf. Inf. Commun. Technol. Converg. (ICTC), Oct. 2019, pp. 1129–1132.

Published

2022-05-07

Issue

Volume 9, Issue 3, May-June 2022

Section

Research Articles

How to Cite

[1]
Mrs. Kirti D. Kulkarni and Prof. Jareena N. Shaikh, "Survey on Text Summarization Framework Using Machine Learning," International Journal of Scientific Research in Science, Engineering and Technology (IJSRSET), Print ISSN: 2395-1990, Online ISSN: 2394-4099, Volume 9, Issue 3, pp. 581-585, May-June 2022.