Blind Image Quality Estimation Using Deep Neural Networks Explicit Image Position

Authors

  • N. Siva Parvathi, PG Student, Department of MCA, St. Ann's College of Engineering & Technology, Chirala, Andhra Pradesh, India
  • P. S. Naveen Kumar, Assistant Professor, Department of MCA, St. Ann's College of Engineering & Technology, Chirala, Andhra Pradesh, India

Keywords:

Wakeby Distribution Model, Support Vector Machine, Blind Image Quality Assessment, Gradient Magnitude, Laplacian of Gaussian, Pseudo Distance Technique

Abstract

Blind image quality assessment (BIQA) is a very challenging problem because no reference image is available. State-of-the-art BIQA methods typically learn to predict image quality by regression on the human subjective scores of training samples. These methods require a large number of human-scored images for training and offer no explicit explanation of how image quality is affected by local image features. We propose a family of image quality assessment (IQA) models based on natural scene statistics (NSS) that predict the subjective quality of a distorted image without reference to the corresponding distortion-free image and without training on human opinion scores of distorted images. Unlike most deep neural networks (DNNs), we adopt the biologically inspired generalized divisive normalization (GDN) instead of the rectified linear unit (ReLU) as the activation function. We empirically demonstrate that GDN is effective at reducing model parameters while achieving similar quality prediction performance. The proposed model is extensively evaluated on large-scale benchmark databases and delivers performance competitive with state-of-the-art BIQA models as well as with some well-known full-reference image quality assessment models. The proposed algorithm uses natural scene statistics in the spatial domain to fit a Wakeby distribution statistical model and extract quality-aware features. The features are fed to an SVM regression model to predict the quality score of an input image without any information about the distortion type or the reference image.
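
As a rough illustration only (not the authors' code), the sketch below shows a spatial-domain NSS-to-SVR pipeline of the kind summarized above, assuming Python with SciPy and scikit-learn. SciPy ships no Wakeby distribution, so a generalized Pareto fit stands in for the Wakeby fit, and every function name and parameter value here is a hypothetical placeholder.

    import numpy as np
    from scipy import ndimage, stats
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    def mscn_coefficients(image, sigma=7.0 / 6.0):
        """Mean-subtracted contrast-normalized (MSCN) coefficients, a standard
        spatial-domain natural scene statistics representation."""
        image = image.astype(np.float64)
        mu = ndimage.gaussian_filter(image, sigma)
        var = np.abs(ndimage.gaussian_filter(image ** 2, sigma) - mu ** 2)
        return (image - mu) / (np.sqrt(var) + 1.0)

    def quality_aware_features(image):
        """Fit a heavy-tailed parametric model to the MSCN coefficients and use its
        parameters, plus simple moments, as quality-aware features. The paper fits
        a Wakeby distribution; a generalized Pareto fit is used here only because
        SciPy provides it out of the box."""
        mscn = mscn_coefficients(image).ravel()
        shape, loc, scale = stats.genpareto.fit(np.abs(mscn))
        return np.array([shape, loc, scale, mscn.mean(), mscn.std(), stats.kurtosis(mscn)])

    def train_biqa_model(train_images, train_scores):
        """Learn a mapping from quality-aware features to subjective scores (MOS/DMOS)."""
        X = np.stack([quality_aware_features(img) for img in train_images])
        model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
        model.fit(X, train_scores)
        return model

    def predict_quality(model, image):
        """Blind prediction: no reference image and no distortion type is required."""
        return float(model.predict(quality_aware_features(image)[None, :])[0])

For the GDN activation mentioned above, a commonly used form is y_i = x_i / (beta_i + sum_j gamma_ij * x_j^2)^(1/2), where beta and gamma are learned; unlike the element-wise ReLU, it normalizes each response by a weighted combination of neighboring squared responses, which the abstract credits for achieving similar prediction accuracy with fewer parameters.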

References

  1. H. R. Wu and K. R. Rao, Digital Video Image Quality and Perceptual Coding. CRC Press, 2005.
  2. A. C. Bovik, “Automatic prediction of perceptual image and video quality,” Proceedings of the IEEE, vol. 101, no. 9, pp. 2008-2024, 2013.
  3. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, 2004.
  4. Z. Wang, E. P. Simoncelli, and A. C. Bovik, “Multiscale structural similarity for image quality assessment,” in Asilomar Conference on Signals, Systems and Computers, vol. 2. IEEE, 2003, pp. 1398-1402.
  5. “CISCO VNI Report,” Networking Solutions White Paper.
  6. Wang, Z., Wu, G., Sheikh, H. R., Simoncelli, E. P., Yang, E. H., and Bovik, A. C., “Quality-aware images,” IEEE Trans Image Process 15, 1680-1689 (2006)
  7. Z. Wang and A. C. Bovik, “Reduced-and no-reference image quality assessment: The natural scene statistic model approach,” IEEE Signal Processing Magazine, vol. 28, no. 6, pp. 29-40, Nov. 2011.
  8. K. Ma, Q. Wu, Z. Wang, Z. Duanmu, H. Yong, H. Li, and L. Zhang, “Group MAD competition − a new methodology to compare objective image quality models,” in IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 1664-1673.
  9. A. K. Moorthy and A. C. Bovik, “Blind image quality assessment: From natural scene statistics to perceptual quality,” IEEE Transactions on Image Processing, vol. 20, no. 12, pp. 3350-3364, Dec. 2011.
  10. A. Mittal, A. K. Moorthy, and A. C. Bovik, “No-reference image quality assessment in the spatial domain,” IEEE Transactions on Image Processing, vol. 21, no. 12, pp. 4695-4708, Dec. 2012.
  11. K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” CoRR, vol. abs/1409.1556, 2014.
  12. K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770-778.
  13. N. Ponomarenko, L. Jin, O. Ieremeiev, V. Lukin, K. Egiazarian, J. Astola, B. Vozel, K. Chehdi, M. Carli, F. Battisti, and C.-C. J. Kuo, “Image database TID2013: Peculiarities, results and perspectives,” Signal Processing: Image Communication, vol. 30, pp. 57-77, Jan. 2015.
  14. S. Bianco, L. Celona, P. Napoletano, and R. Schettini, “On the use of deep learning for blind image quality assessment,” CoRR, vol. abs/1602.05531, 2016
  15. A. C. Bovik, “Automatic prediction of perceptual image and video quality,” Proceedings of the IEEE, vol. 101, no. 9, pp. 2008-2024, Sept. 2013.
  16. Z. Wang, "Applications of objective image quality assessment methods,” IEEE Signal Processing Magazine, vol. 28, Nov. 2011
  17. M. T. Orchard and C. A. Bouman, “Color quantization of images,” IEEE Trans. Signal Process., vol. 39, no. 12, pp. 2677-2690, Dec. 1991.
  18. P. Kovesi. Image features from phase congruency. Journal of Computer Vision Research, 1(3):1-26, 1999.
  19. E. Larson and D. Chandler. Most apparent distortion: fullreference image quality assessment and the role of strategy. Journal of Electronic Imaging, 19(1):011006, 2010.
  20. D. Martin, C. Fowlkes, D. Tal, and J. Malik. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In ICCV, 2001
  21. P. Marziliano, F. Dufaux, S. Winkler, and T. Ebrahimi. A no-reference perceptual blur metric. In ICIP, 2002
  22. Zaric, A., Loncaric, M., Tralic, D., Brzica, M., Dumic, E., Grgic, S.: Image quality assessment - comparison of objective measures with results of subjective test. In: ELMAR, 2010 proceedings, pp. 113-118 (2010)
  23. Zhou, W., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 13, 600-612 (2004)
  24. Dacheng, T., Xuelong, L., Wen, L., Xinbo, G.: Reduced-reference IQA in contourlet domain. IEEE Trans. Syst. Man Cybern. Part B Cybern. 39, 1623-1627 (2009)
  25. A. K. Moorthy and A. C. Bovik, “A two-step framework for constructing blind image quality indices,” IEEE Signal Process. Lett., vol. 17, no. 5, pp. 513-516, 2010.
  26. M. A. Saad, A. C. Bovik, and C. Charrier, “A DCT statistics-based blind image quality index,” IEEE Signal Process. Lett., vol. 17, no. 6, pp. 583-586, 2010.
  27. T. Hofmann, “Unsupervised learning by probabilistic latent semantic analysis,” Mach. Learn., vol. 42, no. 1, pp. 177-196, 2001
  28. Huynh-Thu, Q., Ghanbari, M.: Scope of validity of PSNR in image/video quality assessment. Electron. Lett. 44, 800-801 (2008)
  29. Griffiths, G.A.: A theoretically based Wakeby distribution for annual flood series. Hydrol. Sci. J. 34, 231-248 (1989)
  30. Oztekin, T.: Estimation of the parameters of the Wakeby distribution by a numerical least squares method and applying it to the annual peak flows of Turkish rivers. Water Resour. Manage. 25, 1299-1313 (2011)

Published

2018-04-30

Issue

Section

Research Articles

How to Cite

[1]
N. Siva Parvathi and P. S. Naveen Kumar, “Blind Image Quality Estimation Using Deep Neural Networks Explicit Image Position,” International Journal of Scientific Research in Science, Engineering and Technology (IJSRSET), Print ISSN: 2395-1990, Online ISSN: 2394-4099, Volume 4, Issue 1, pp. 1550-1555, January-February 2018.