Representing Objects in Video as Space-Time Volumes by Combining Top-Down and Bottom-Up Processes

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Research › peer-review

Abstract

As top-down approaches to object recognition in video become more powerful, a structured way to combine them with bottom-up grouping processes becomes feasible. When done right, the resulting representation is able to describe objects and their decomposition into parts at appropriate spatio-temporal scales. We propose a method that uses a modern object detector to focus on salient structures in video, and a dense optical flow estimator to supplement feature extraction. From these structures we extract space-time volumes of interest (STVIs) by smoothing in spatio-temporal Gaussian scale space, which guides bottom-up grouping. The resulting novel representation enables us to analyze and visualize the decomposition of an object into meaningful parts while preserving temporal object continuity. Our experimental validation is twofold. First, we achieve competitive results on a common video object segmentation benchmark. Second, we extend this benchmark with high-quality object part annotations, DAVIS Parts, on which we establish a strong baseline by showing that our method yields spatio-temporally meaningful object parts. Our new representation will support applications that require high-level space-time reasoning at the parts level.
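
The abstract outlines a pipeline that fuses top-down detector cues with bottom-up motion cues, smooths the result in a spatio-temporal Gaussian scale space, and groups the smoothed volume into STVIs. The following is a minimal sketch of that general idea, not the authors' implementation: the function name, the equal weighting of cues, and all parameter values are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's code): fuse per-frame detector
# saliency with optical-flow magnitude into a video volume, smooth it in
# spatio-temporal Gaussian scale space, and extract space-time volumes of
# interest (STVIs) as thresholded connected components.

import numpy as np
from scipy.ndimage import gaussian_filter, label

def extract_stvis(detector_saliency, flow_magnitude,
                  sigma_spatial=3.0, sigma_temporal=1.5, threshold=0.5):
    """detector_saliency, flow_magnitude: float arrays of shape (T, H, W) in [0, 1]."""
    # Combine top-down (detector) and bottom-up (motion) cues.
    # Equal weighting is an assumption made for illustration only.
    volume = 0.5 * detector_saliency + 0.5 * flow_magnitude

    # Smooth in spatio-temporal Gaussian scale space (time axis first).
    smoothed = gaussian_filter(
        volume, sigma=(sigma_temporal, sigma_spatial, sigma_spatial))

    # Group above-threshold voxels into connected space-time components (STVIs).
    labeled_volume, num_stvis = label(smoothed > threshold)
    return labeled_volume, num_stvis
```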
Original language: English
Title of host publication: 2020 Winter Conference on Applications of Computer Vision
Pages: 1914-1922
Publication status: Published - 1 Mar 2020
Event: IEEE Winter Conference on Applications of Computer Vision - Snowmass Village, United States
Duration: 2 Mar 2020 - 4 Mar 2020

Conference

Conference: IEEE Winter Conference on Applications of Computer Vision
Abbreviated title: WACV 2020
Country: United States
City: Snowmass Village
Period: 2/03/20 - 4/03/20


  • Cite this

    Ilic, F., & Pinz, A. (2020). Representing Objects in Video as Space-Time Volumes by Combining Top-Down and Bottom-Up Processes. In 2020 Winter Conference on Applications of Computer Vision (pp. 1914-1922).