Survey on Silentinterpreter: Analysis of Lip Movement and Extracting Speech using Deep Learning

Authors

  • Ameen Hafeez, Department of Computer Science, Dayananda Sagar College of Engineering, Bengaluru, Karnataka, India
  • Rohith M K, Department of Computer Science, Dayananda Sagar College of Engineering, Bengaluru, Karnataka, India
  • Sakshi Prashant, Department of Computer Science, Dayananda Sagar College of Engineering, Bengaluru, Karnataka, India
  • Sinchana Hegde, Department of Computer Science, Dayananda Sagar College of Engineering, Bengaluru, Karnataka, India
  • Prof. Shwetha K S, Artificial Intelligence & Data Science Engineering, Zeal College of Engineering and Research, Pune, India

DOI:

https://doi.org/10.32628/IJSRSET2411219

Keywords:

Lip Reading, Deep Learning, CNN, RNN, CTC Loss, Speech Recognition

Abstract

Lip reading, the ability to decipher spoken words from visual cues in lip movements, is a challenging but promising direction for speech recognition. In this study, we propose a method for lip reading that converts lip motions into textual representations using deep neural networks. The methodology uses convolutional neural networks to extract visual features, recurrent neural networks to model temporal context, and the Connectionist Temporal Classification (CTC) loss function to align lip features with the corresponding phonemes.
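As a concrete illustration of the CTC alignment step, a minimal Keras-style loss wrapper is sketched below; fixed-length inputs and labels padded to a common maximum length are assumptions of the sketch, not details taken from the paper.

```python
import tensorflow as tf

def ctc_loss(y_true, y_pred):
    """CTC loss usable as model.compile(loss=ctc_loss).

    y_true: padded integer label sequences, shape (batch, max_label_len)
    y_pred: per-frame softmax outputs, shape (batch, time, vocab)
    """
    batch_len = tf.cast(tf.shape(y_true)[0], dtype="int64")
    # every frame of y_pred is treated as valid input
    input_length = tf.cast(tf.shape(y_pred)[1], dtype="int64") * tf.ones((batch_len, 1), dtype="int64")
    # labels are assumed padded to a common maximum length
    label_length = tf.cast(tf.shape(y_true)[1], dtype="int64") * tf.ones((batch_len, 1), dtype="int64")
    return tf.keras.backend.ctc_batch_cost(y_true, y_pred, input_length, label_length)
```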

The study begins with a thorough treatment of data loading, covering video preparation and alignment extraction. A carefully selected dataset of video clips with matching phonetic alignments is presented. We crop the relevant face region, convert frames to grayscale, and standardize the resulting data before feeding it into the neural network.
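A sketch of this preprocessing pipeline using OpenCV is given below; the fixed mouth-crop coordinates are illustrative assumptions, not values from the paper.

```python
import cv2
import numpy as np

def load_video(path, crop=(190, 236, 80, 220)):
    """Read a clip, convert each frame to grayscale, crop a fixed mouth
    region, and standardize to zero mean / unit variance.
    The crop window (y1, y2, x1, x2) is illustrative only."""
    y1, y2, x1, x2 = crop
    cap = cv2.VideoCapture(path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        frames.append(gray[y1:y2, x1:x2])
    cap.release()
    video = np.stack(frames).astype(np.float32)
    return (video - video.mean()) / (video.std() + 1e-8)
```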

The neural network architecture is presented in depth: 3D convolutional layers for spatial feature extraction followed by a series of bidirectional LSTM layers for temporal context understanding. Careful consideration of input shapes, layer combinations, and parameter choices underpins the model's design. To train the model, we align predicted phoneme sequences with ground-truth alignments using the CTC loss.
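A minimal Keras sketch of such an architecture follows. The input shape (75 grayscale frames of 46x140 mouth crops), layer sizes, and the 40-symbol vocabulary are assumptions in the style of GRID-corpus lip-reading models, not the paper's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB = 40  # assumed character/phoneme inventory plus the CTC blank

model = models.Sequential([
    layers.Input(shape=(75, 46, 140, 1)),                  # (time, height, width, channels)
    # 3D convolutions extract spatio-temporal lip features
    layers.Conv3D(128, 3, padding="same", activation="relu"),
    layers.MaxPool3D(pool_size=(1, 2, 2)),                 # pool space only, keep time
    layers.Conv3D(256, 3, padding="same", activation="relu"),
    layers.MaxPool3D(pool_size=(1, 2, 2)),
    layers.Conv3D(75, 3, padding="same", activation="relu"),
    layers.MaxPool3D(pool_size=(1, 2, 2)),
    # flatten each frame, then model temporal context in both directions
    layers.TimeDistributed(layers.Flatten()),
    layers.Bidirectional(layers.LSTM(128, return_sequences=True)),
    layers.Dropout(0.5),
    layers.Bidirectional(layers.LSTM(128, return_sequences=True)),
    layers.Dropout(0.5),
    # per-frame distribution over the vocabulary for CTC
    layers.Dense(VOCAB, activation="softmax"),
])
```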

Dynamic learning rate scheduling and a custom callback that visualizes predictions during training are integrated into the training process. After training on a sizable dataset, the model exhibits remarkable convergence and demonstrates its capacity to learn intricate temporal correlations.
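A sketch of such a training loop is shown below, reusing the `model` and `ctc_loss` pieces above; the decay policy and the `train_ds`, `sample_batch`, and `num_to_char` helpers are assumed placeholders, not artifacts from the paper.

```python
import math
import tensorflow as tf
from tensorflow.keras.callbacks import Callback, LearningRateScheduler

def schedule(epoch, lr):
    # hold the initial rate early on, then decay exponentially (illustrative policy)
    return lr if epoch < 30 else lr * math.exp(-0.1)

class ShowPrediction(Callback):
    """Greedy CTC-decode one held-out batch after each epoch so training
    progress can be inspected as text."""
    def __init__(self, batch, num_to_char):
        super().__init__()
        self.batch, self.num_to_char = batch, num_to_char

    def on_epoch_end(self, epoch, logs=None):
        x, _ = self.batch
        probs = self.model.predict(x, verbose=0)
        lengths = [probs.shape[1]] * probs.shape[0]
        decoded = tf.keras.backend.ctc_decode(probs, lengths, greedy=True)[0][0]
        print("sample prediction:", self.num_to_char(decoded[0]))

model.compile(optimizer="adam", loss=ctc_loss)
model.fit(train_ds, epochs=100,
          callbacks=[LearningRateScheduler(schedule),
                     ShowPrediction(sample_batch, num_to_char)])
```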

The results are assessed through both quantitative and qualitative evaluation: we measure performance with standard speech recognition metrics and visually inspect the model's lip-reading output. The effect of different model architectures and hyperparameters on performance is also explored, offering guidance for future research.
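The abstract does not name the metrics; word error rate (WER) and character error rate (CER) are the usual choices for this task, and a self-contained sketch of both is given below.

```python
def edit_distance(ref, hyp):
    """Levenshtein distance with a single rolling DP row."""
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion
                                     dp[j - 1] + 1,    # insertion
                                     prev + (r != h))  # substitution or match
    return dp[-1]

def wer(ref, hyp):
    """Word error rate: edit distance over word sequences."""
    return edit_distance(ref.split(), hyp.split()) / max(len(ref.split()), 1)

def cer(ref, hyp):
    """Character error rate: edit distance over characters."""
    return edit_distance(ref, hyp) / max(len(ref), 1)

print(wer("bin blue at f two now", "bin blue at f two now"))  # 0.0
```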

To demonstrate its practical applicability, the trained model is tested on external video samples, confirming its accuracy and resilience in lip-reading spoken phrases.
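Under the same assumptions, inference on an unseen clip reduces to the preprocessing, forward pass, and greedy CTC decoding already sketched; the file name and `num_to_char` helper here are placeholders.

```python
import tensorflow as tf

clip = load_video("external_sample.mp4")        # (time, height, width), standardized
x = clip[None, ..., None]                       # add batch and channel axes
probs = model.predict(x)
decoded = tf.keras.backend.ctc_decode(probs, [probs.shape[1]], greedy=True)[0][0]
print("predicted text:", num_to_char(decoded[0]))
```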

By providing a deep learning framework for accurate and efficient visual speech recognition, this research contributes to the rapidly evolving field of lip reading. The results open opportunities for further development and deployment in fields such as assistive technologies, audio-visual communication systems, and human-computer interaction.


References

Wang, H., Pu, G., & Chen, T. "A Lip Reading Method Based on 3D Convolutional Vision Transformer."

Deshmukh, N., Ahire, A., Bhandari, S. H., Mali, A., & Warkari, K. (2021). "Vision Based Lip Reading System Using Deep Learning." In 2021 International Conference on Computing, Communication and Green Engineering (CCGE) (pp. 1-6). IEEE. doi: 10.1109/CCGE50943.2021.9776430

Lu, Y., & Li, H. (2019). "Automatic Lip-Reading System Based on Deep Convolutional Neural Network and Attention-Based Long Short-Term Memory." Applied Sciences, 9(8), 1599. doi: 10.3390/app9081599

Scanlon, P., Reilly, R., & de Chazal, P. (2003). "Visual Feature Analysis for Automatic Speechreading." In International Conference on Audio-Visual Speech Processing.

Kapkar, P. P., & Bharkad, S. D. (2019). "Lip Feature Extraction and Movement Recognition Methods." International Journal of Scientific & Technology Research, 8.

Ozcan, T., & Basturk, A. (2019). "Lip Reading Using Convolutional Neural Networks with and without Pre-Trained Models." Balkan Journal of Electrical and Computer Engineering, 7(2). doi: 10.17694/bajece.479891

Garg, A., Noyola, J., & Bagadia, S. (2016). "Lip reading using CNN and LSTM."

Gutierrez, A., & Robert, Z-A. (2017). "Lip Reading Word Classification." Stanford University.

Fenghour, S., Chen, D., Guo, K., & Xiao, P. (2020). "Lip Reading Sentences Using Deep Learning With Only Visual Cues." IEEE Access, 8, 215516-215530. doi: 10.1109/ACCESS.2020.3040906

Vayadande, K., Adsare, T., Agrawal, N., Dharmik, T., Patil, A., & Zod, S. (2023). "LipReadNet: A Deep Learning Approach to Lip Reading." In 2023 International Conference on Applied Intelligence and Sustainable Computing (ICAISC) (pp. 1-6). IEEE. doi: 10.1109/ICAISC58445.2023.10200426

Published

07-04-2024

Section

Research Articles

How to Cite

[1]
Ameen Hafeez, Rohith M K, Sakshi Prashant, Sinchana Hegde, and Prof. Shwetha K S, “Survey on Silentinterpreter: Analysis of Lip Movement and Extracting Speech using Deep Learning”, Int J Sci Res Sci Eng Technol, vol. 11, no. 2, pp. 183–191, Apr. 2024, doi: 10.32628/IJSRSET2411219.
