Discovering the Spatial Extent of Relative Attributes


Presented at ICCV 2015

People

Fanyi Xiao and Yong Jae Lee

Abstract

We present a weakly-supervised approach that discovers the spatial extent of relative attributes, given only pairs of ordered images. In contrast to traditional approaches that use global appearance features or rely on keypoint detectors, our goal is to automatically discover the image regions that are relevant to the attribute, even when the attribute's appearance changes drastically across its attribute spectrum. To accomplish this, we first develop a novel formulation that combines a detector with local smoothness to discover a set of coherent visual chains across the image collection. We then introduce an efficient way to generate additional chains anchored on the initial discovered ones. Finally, we automatically identify the most relevant visual chains, and create an ensemble image representation to model the attribute. Through extensive experiments, we demonstrate our method's promise relative to several baselines in modeling relative attributes.
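To make the pipeline described above more concrete, the sketch below illustrates the first step in a minimal, self-contained way: picking one region per image (with images ordered by attribute strength) so that the chain trades off per-region detector scores against spatial smoothness between consecutive images, solved here with a simple dynamic program. The function name discover_chain, the proposal format, and the toy data are illustrative assumptions; this is not the released implementation, whose exact formulation is given in the paper.

# Hypothetical sketch: form a "visual chain" by selecting one region per image,
# balancing detector scores against spatial smoothness between consecutive
# images. Illustration only; not the authors' released code.
import numpy as np

def discover_chain(candidate_boxes, detector_scores, smoothness_weight=1.0):
    """
    candidate_boxes:  list (one entry per image, in attribute order) of
                      (K_i, 4) arrays of [x, y, w, h] region proposals.
    detector_scores:  list of (K_i,) arrays scoring how well each region
                      matches the current appearance model.
    Returns the index of the chosen region in each image.
    """
    n = len(candidate_boxes)
    # best[i][k] = best objective for a chain ending at region k of image i
    best = [detector_scores[0].astype(float)]
    back = []
    for i in range(1, n):
        prev_centers = candidate_boxes[i - 1][:, :2] + candidate_boxes[i - 1][:, 2:] / 2
        cur_centers = candidate_boxes[i][:, :2] + candidate_boxes[i][:, 2:] / 2
        # pairwise squared distances between region centers: (K_{i-1}, K_i)
        dists = ((prev_centers[:, None, :] - cur_centers[None, :, :]) ** 2).sum(-1)
        # transition score: previous best minus a smoothness penalty
        trans = best[-1][:, None] - smoothness_weight * dists
        back.append(trans.argmax(axis=0))
        best.append(detector_scores[i] + trans.max(axis=0))
    # backtrack the highest-scoring chain
    chain = [int(best[-1].argmax())]
    for i in range(n - 1, 0, -1):
        chain.append(int(back[i - 1][chain[-1]]))
    return chain[::-1]

# Toy usage: random proposals and scores for 5 images ordered by attribute strength.
rng = np.random.default_rng(0)
boxes = [rng.uniform(0, 100, size=(8, 4)) for _ in range(5)]
scores = [rng.normal(size=8) for _ in range(5)]
print(discover_chain(boxes, scores))

Treating chain discovery as a shortest-path style dynamic program over per-image proposals is one simple way to enforce local smoothness; the subsequent steps (generating anchored chains and selecting the most relevant ones for the ensemble representation) are described in the paper.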

Paper

 
Fanyi Xiao and Yong Jae Lee
Discovering the Spatial Extent of Relative Attributes
In ICCV 2015

Book Chapter

 
Visual Attributes
Rogerio Schmidt Feris, Christoph Lampert, and Devi Parikh (Eds.)

Additional Materials

Source Code

LRA_release.tar.gz

Acknowledgments

This research was supported in part by:

Comments and questions to Fanyi Xiao