Abstract
Automated assessment of visual sentiment has many applications, such as monitoring social media and facilitating online advertising. Current research on automated visual sentiment assessment mainly inputs and processes images as a whole. However, human attention is biased: a focal region with high acuity can disproportionately influence visual sentiment. To investigate how attention influences visual sentiment, we conducted experiments that reveal critical insights into human perception. We find that negative sentiments are elicited by the focal region with no notable influence from contextual information, whereas positive sentiments are influenced by both focal and contextual information. Building on these insights, we design new deep convolutional neural networks for sentiment prediction with additional channels devoted to encoding focal information. On two benchmark datasets, the proposed models outperform state-of-the-art methods. Extensive visualizations and statistical analyses indicate that the focal channels are most effective on images with focal objects, especially images that also elicit negative sentiments.
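The abstract does not specify how the focal channels are implemented. As an illustration only, one common way to devote extra input channels to focal information is to append a saliency/attention mask to the RGB input before it enters a CNN; the function name and the center-weighted Gaussian mask below are hypothetical stand-ins, not the authors' method.

```python
import numpy as np

def add_focal_channel(rgb, focal_mask):
    """Stack a focal (attention) mask onto an RGB image as a 4th channel.

    rgb: (H, W, 3) float array; focal_mask: (H, W) array in [0, 1].
    A CNN consuming this input would simply set in_channels=4.
    """
    assert rgb.shape[:2] == focal_mask.shape
    return np.concatenate([rgb, focal_mask[..., None]], axis=-1)

# Hypothetical focal prior: a Gaussian centered on the image,
# standing in for a predicted saliency or fixation map.
H, W = 224, 224
ys, xs = np.mgrid[0:H, 0:W]
cy, cx = (H - 1) / 2, (W - 1) / 2
mask = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * (0.25 * H) ** 2))

img = np.random.rand(H, W, 3).astype(np.float32)
x = add_focal_channel(img, mask)
# x has shape (H, W, 4): RGB plus one focal channel
```

The same idea extends to multiple focal channels (e.g. separate fixation and object masks) by concatenating more planes along the channel axis.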
| Original language | English (US) |
|---|---|
| Title of host publication | MM 2017 - Proceedings of the 2017 ACM Multimedia Conference |
| Publisher | Association for Computing Machinery, Inc |
| Pages | 217-225 |
| Number of pages | 9 |
| ISBN (Electronic) | 9781450349062 |
| DOIs | |
| State | Published - Oct 23 2017 |
| Event | 25th ACM International Conference on Multimedia, MM 2017 - Mountain View, United States. Duration: Oct 23 2017 → Oct 27 2017 |
Publication series

| Name | MM 2017 - Proceedings of the 2017 ACM Multimedia Conference |
|---|---|
Other

| Other | 25th ACM International Conference on Multimedia, MM 2017 |
|---|---|
| Country/Territory | United States |
| City | Mountain View |
| Period | 10/23/17 → 10/27/17 |
Bibliographical note
Publisher Copyright: © 2017 ACM.
Keywords
- Neural network
- Social multimedia
- Visual sentiment