Representing Objects in Video as Space-Time Volumes by Combining Top-Down and Bottom-Up Processes

Publication: Contribution in book/report/conference proceedings › Conference paper › Research › Peer-reviewed

Abstract

As top-down based approaches of object recognition from video are getting more powerful, a structured way to combine them with bottom-up grouping processes becomes feasible. When done right, the resulting representation is able to describe objects and their decomposition into parts at appropriate spatio-temporal scales.

We propose a method that uses a modern object detector to focus on salient structures in video, and a dense optical flow estimator to supplement feature extraction. From these structures we extract space-time volumes of interest (STVIs) by smoothing in spatio-temporal Gaussian Scale Space that guides bottom-up grouping.

The resulting novel representation enables us to analyze and visualize the decomposition of an object into meaningful parts while preserving temporal object continuity. Our experimental validation is twofold. First, we achieve competitive results on a common video object segmentation benchmark. Second, we extend this benchmark with high quality object part annotations, DAVIS Parts, on which we establish a strong baseline by showing that our method yields spatio-temporally meaningful object parts. Our new representation will support applications that require high-level space-time reasoning at the parts level.
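The core idea of the abstract, smoothing video structures in spatio-temporal Gaussian scale space and grouping the result into space-time volumes, can be illustrated with a minimal sketch. This is not the authors' implementation: the synthetic volume, the sigma values, and the threshold are all illustrative assumptions; in the paper, the support would come from an object detector and dense optical flow.

```python
# Minimal sketch (assumptions, not the authors' pipeline): smooth a video
# "saliency" volume with an anisotropic spatio-temporal Gaussian, then
# group voxels into connected space-time volumes of interest (STVIs).
import numpy as np
from scipy.ndimage import gaussian_filter, label

# Synthetic support volume (time, height, width): a small square that
# moves to the right over 16 frames.
T, H, W = 16, 64, 64
volume = np.zeros((T, H, W))
for t in range(T):
    volume[t, 20:30, 10 + 2 * t:20 + 2 * t] = 1.0

# Anisotropic smoothing: separate temporal and spatial scales, as in a
# spatio-temporal Gaussian scale space.
sigma_t, sigma_xy = 1.5, 3.0
smoothed = gaussian_filter(volume, sigma=(sigma_t, sigma_xy, sigma_xy))

# Threshold and group connected voxels; each connected component is one
# candidate space-time volume, continuous across frames.
mask = smoothed > 0.25
stvis, n_stvis = label(mask)
print(n_stvis)
```

Because the square overlaps itself between consecutive frames, the smoothed, thresholded voxels form a single temporally continuous component, which is exactly the kind of object-level continuity the representation is after.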
Original language: English
Title: 2020 Winter Conference on Applications of Computer Vision
Pages: 1914-1922
Publication status: Published - 1 Mar 2020
Event: IEEE Winter Conference on Applications of Computer Vision - Snowmass Village, United States
Duration: 2 Mar 2020 - 4 Mar 2020

Conference

Conference: IEEE Winter Conference on Applications of Computer Vision
Short title: WACV 2020
Country: United States
City: Snowmass Village
Period: 2/03/20 - 4/03/20


Cite this

Ilic, F., & Pinz, A. (2020). Representing Objects in Video as Space-Time Volumes by Combining Top-Down and Bottom-Up Processes. In 2020 Winter Conference on Applications of Computer Vision (pp. 1914-1922).