Object Detection and Sentence Generation from Images
Keywords:
Computer vision, Object detection, RNN

Abstract
Automatically describing the content of an image in well-formed English sentences is a challenging task. The ultimate goal is to generate descriptions of image regions, so a model is developed that generates natural language descriptions of images and their regions. The approach leverages datasets of images paired with sentence descriptions to learn the inter-modal correspondences between language and visual data. The alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. A Multimodal Recurrent Neural Network architecture is then described that uses the inferred alignments to learn to generate novel descriptions of image regions. The alignment model produces state-of-the-art results in retrieval experiments on the Flickr8K dataset, and the generated descriptions significantly outperform retrieval baselines on both full images and a new dataset of region-level annotations.
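To make the structured objective concrete, the following is a minimal NumPy sketch of an image-sentence scoring function and a max-margin ranking loss of the kind the abstract describes. It assumes CNN region features and bidirectional RNN word features have already been projected into a shared multimodal embedding space; the feature extractors themselves are omitted, and all function names, dimensions, and the exact hinge formulation here are illustrative assumptions, not the paper's definitive implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def score_image_sentence(region_feats, word_feats):
    """Image-sentence score: each word fragment is matched to its
    best-scoring image region, and the per-word maxima are summed."""
    sims = word_feats @ region_feats.T          # (num_words, num_regions)
    return sims.max(axis=1).sum()

def ranking_loss(images, sentences, margin=1.0):
    """Structured max-margin loss over a batch (illustrative form):
    the score of each true (image, sentence) pair should beat every
    mismatched pair by at least `margin`, in both retrieval directions."""
    n = len(images)
    S = np.array([[score_image_sentence(images[i], sentences[j])
                   for j in range(n)] for i in range(n)])
    loss = 0.0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            loss += max(0.0, margin + S[i, j] - S[i, i])  # rank sentences given image i
            loss += max(0.0, margin + S[j, i] - S[i, i])  # rank images given sentence i
    return loss

# Toy batch: 3 images with 5 regions each and 3 sentences with 7 words
# each, all embedded in the same 64-dimensional multimodal space.
images = [rng.standard_normal((5, 64)) for _ in range(3)]
sentences = [rng.standard_normal((7, 64)) for _ in range(3)]
print(ranking_loss(images, sentences))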
References
- A. Karpathy and L. Fei-Fei, "Deep Visual-Semantic Alignments for Generating Image Descriptions," in CVPR, 2015.
- M. Schuster and K. K. Paliwal, "Bidirectional Recurrent Neural Networks," IEEE Transactions on Signal Processing, 45(11):2673–2681, 1997.
- M. Hodosh, P. Young, and J. Hockenmaier, "Framing image description as a ranking task: Data, models and evaluation metrics," Journal of Artificial Intelligence Research, 2013.
- D. Elliott and F. Keller, "Image description using visual dependency representations," in EMNLP, pages 1292–1302, 2013.