Deep Insights into Convolutional Networks for Video Recognition

Christoph Feichtenhofer, Axel Pinz, Richard Wildes, Andrew Zisserman

Research output: Contribution to journal › Article › Research › peer-review

Abstract

As the success of deep models has led to their deployment in all areas of computer vision, it is increasingly important to understand how these representations work and what they are capturing. In this paper, we shed light on deep spatiotemporal representations by visualizing the internal representation of models that have been trained to recognize actions in video. We visualize multiple two-stream architectures to show that local detectors for appearance and motion objects arise to form distributed representations for recognizing human actions. Key observations include the following. First, cross-stream fusion enables the learning of true spatiotemporal features rather than simply separate appearance and motion features. Second, the networks can learn local representations that are highly class specific, but also generic representations that can serve a range of classes. Third, throughout the hierarchy of the network, features become more abstract and show increasing invariance to aspects of the data that are unimportant to desired distinctions (e.g. motion patterns across various speeds). Fourth, visualizations can be used not only to shed light on learned representations, but also to reveal idiosyncrasies of training data and to explain failure cases of the system.
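To make the cross-stream fusion observation concrete, the sketch below shows one common way to fuse an appearance (RGB) stream with a motion (optical-flow) stream at an intermediate convolutional layer: concatenate the two feature maps and mix them with a 1x1 convolution. This is a minimal illustration assuming PyTorch, not the authors' released code; the channel count, layer depth, and tensor shapes are hypothetical.

```python
# Minimal sketch of cross-stream (appearance + motion) fusion,
# assuming PyTorch. Shapes and the 1x1-conv fusion variant are
# illustrative assumptions, not the authors' exact implementation.
import torch
import torch.nn as nn

class CrossStreamFusion(nn.Module):
    """Fuse RGB-stream and flow-stream feature maps so that later
    layers can learn joint spatiotemporal filters instead of keeping
    appearance and motion features separate."""

    def __init__(self, channels: int):
        super().__init__()
        # Concatenation doubles the channels; a 1x1 convolution mixes
        # the two streams back down to the original width.
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, rgb_feat: torch.Tensor, flow_feat: torch.Tensor) -> torch.Tensor:
        # Both inputs: (batch, channels, height, width), taken from the
        # same depth of the two streams.
        return self.fuse(torch.cat([rgb_feat, flow_feat], dim=1))

# Toy usage with conv5-sized activations from a hypothetical backbone.
rgb = torch.randn(2, 512, 7, 7)   # appearance-stream features
flow = torch.randn(2, 512, 7, 7)  # motion-stream features
fused = CrossStreamFusion(512)(rgb, flow)
print(fused.shape)  # torch.Size([2, 512, 7, 7])
```

After a fusion point like this, a single downstream filter can respond jointly to what is present and how it moves, which is the "true spatiotemporal features" behavior the paper's visualizations reveal.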
Original language: English
Number of pages: 18
Journal: International Journal of Computer Vision
DOI: 10.1007/s11263-019-01225-w
Publication status: E-pub ahead of print - 29 Oct 2019

Keywords

  • Computer vision
  • Machine learning
  • Deep Learning
  • Video recognition
  • Neural network visualization
  • Action recognition

Fields of Expertise

  • Information, Communication & Computing

Cite this

Feichtenhofer, C., Pinz, A., Wildes, R., & Zisserman, A. (2019). Deep Insights into Convolutional Networks for Video Recognition. International Journal of Computer Vision. https://doi.org/10.1007/s11263-019-01225-w
@article{fb97f7d1228044bcb276c5adc5964349,
title = "Deep Insights into Convolutional Networks for Video Recognition",
abstract = "As the success of deep models has led to their deployment in all areas of computer vision, it is increasingly important to understand how these representations work and what they are capturing. In this paper, we shed light on deep spatiotemporal representations by visualizing the internal representation of models that have been trained to recognize actions in video. We visualize multiple two-stream architectures to show that local detectors for appearance and motion objects arise to form distributed representations for recognizing human actions. Key observations include the following. First, cross-stream fusion enables the learning of true spatiotemporal features rather than simply separate appearance and motion features. Second, the networks can learn local representations that are highly class specific, but also generic representations that can serve a range of classes. Third, throughout the hierarchy of the network, features become more abstract and show increasing invariance to aspects of the data that are unimportant to desired distinctions (e.g. motion patterns across various speeds). Fourth, visualizations can be used not only to shed light on learned representations, but also to reveal idiosyncrasies of training data and to explain failure cases of the system.",
keywords = "Computer vision, Machine learning, Deep Learning, Video recognition, Neural network visualization, Action recognition",
author = "Christoph Feichtenhofer and Axel Pinz and Richard Wildes and Andrew Zisserman",
year = "2019",
month = "10",
day = "29",
doi = "10.1007/s11263-019-01225-w",
language = "English",
journal = "International Journal of Computer Vision",
issn = "0920-5691",
publisher = "Springer Vieweg",

}
