Authors: Yu, JG (Yu, Jin-Gang); Xia, GS (Xia, Gui-Song); Gao, CX (Gao, Changxin); Samal, A (Samal, Ashok)
Abstract: The past few years have witnessed impressive progress in research on salient object detection. Nevertheless, existing approaches still cannot perform satisfactorily on complex scenes, particularly when the salient objects have non-uniform appearance or complicated shapes and the background has complex structure. One important reason for such limitations may be that these approaches commonly ignore the factor of perceptual grouping in saliency modeling. To address this issue, this paper presents a novel computational model for object-based visual saliency, which explicitly takes into consideration the connections between attention and perceptual grouping, and incorporates Gestalt grouping cues into saliency computation. Inspired by the sensory enhancement theory, we suggest a paradigm for object-based saliency modeling in which object-based saliency stems from spreading attention along Gestalt grouping cues. Computationally, three typical Gestalt cues, namely proximity, similarity, and closure, are extracted from the given image and then integrated by constructing a unified Gestalt graph. A new algorithm named personalized power iteration clustering is developed to effectively spread attention information across the Gestalt graph. Extensive experiments have been carried out to demonstrate the superior performance of the proposed model in comparison with the state of the art.
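The abstract does not give the update rule of the proposed personalized power iteration clustering. As an illustration only, the sketch below spreads an initial attention signal over a row-normalized affinity (Gestalt) graph via power iteration with a personalized-PageRank-style restart term; the affinity matrix `W`, restart weight `alpha`, and seed vector `p` are assumptions for the example, not the authors' actual formulation.

```python
import numpy as np

def personalized_power_iteration(W, p, alpha=0.85, tol=1e-6, max_iter=1000):
    """Illustrative sketch, not the paper's exact algorithm.

    Spreads an initial attention/saliency signal `p` over a graph with
    affinity matrix `W` (e.g., a Gestalt graph built from proximity,
    similarity, and closure cues) by power iteration with a restart term.
    """
    # Row-normalize the affinity matrix so each row sums to 1.
    P = W / W.sum(axis=1, keepdims=True)
    p = p / p.sum()          # normalize the personalization (seed) vector
    v = p.copy()             # start the iteration from the seed distribution
    for _ in range(max_iter):
        # Diffuse attention along graph edges, with restart to the seed.
        v_next = alpha * (P.T @ v) + (1 - alpha) * p
        if np.abs(v_next - v).sum() < tol:
            return v_next    # converged per-node attention scores
        v = v_next
    return v

# Toy usage: 4 nodes, the first two strongly grouped by Gestalt cues.
W = np.array([[0.0, 1.0, 0.1, 0.1],
              [1.0, 0.0, 0.1, 0.1],
              [0.1, 0.1, 0.0, 1.0],
              [0.1, 0.1, 1.0, 0.0]])
p = np.array([1.0, 0.0, 0.0, 0.0])        # initial attention on node 0
print(personalized_power_iteration(W, p))  # attention spreads mainly to node 1
```

In this toy example, the strong affinity between the first two nodes causes attention seeded on node 0 to concentrate on its grouped neighbor, which is the qualitative behavior the abstract describes for attention spreading along grouping cues.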