Abstract
Original language | English |
---|---|
Title of host publication | 2018 IEEE Intelligent Transportation Systems Conference, ITSC 2018 |
Publisher | Institute of Electrical and Electronics Engineers |
Pages | 3212-3217 |
Number of pages | 6 |
Volume | 2018-November |
ISBN (Electronic) | 9781728103235 |
DOIs | 10.1109/ITSC.2018.8569670 |
Publication status | Published - 7 Dec 2018 |
Event | 21st IEEE International Conference on Intelligent Transportation Systems - Maui, United States Duration: 4 Nov 2018 → 7 Nov 2018 |
Conference
Conference | 21st IEEE International Conference on Intelligent Transportation Systems |
---|---|
Abbreviated title | ITSC |
Country | United States |
City | Maui |
Period | 4/11/18 → 7/11/18 |
Cite this
Deep 2.5D Vehicle Classification with Sparse SfM Depth Prior for Automated Toll Systems. / Waltner, Georg; Maurer, Michael; Holzmann, Thomas; Ruprecht, Patrick; Opitz, Michael; Possegger, Horst; Fraundorfer, Friedrich; Bischof, Horst.
2018 IEEE Intelligent Transportation Systems Conference, ITSC 2018. Vol. 2018-November. Institute of Electrical and Electronics Engineers, 2018. p. 3212-3217, article 8569670.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Research › peer-review
TY - GEN
T1 - Deep 2.5D Vehicle Classification with Sparse SfM Depth Prior for Automated Toll Systems
AU - Waltner, Georg
AU - Maurer, Michael
AU - Holzmann, Thomas
AU - Ruprecht, Patrick
AU - Opitz, Michael
AU - Possegger, Horst
AU - Fraundorfer, Friedrich
AU - Bischof, Horst
PY - 2018/12/7
Y1 - 2018/12/7
N2 - Automated toll systems rely on proper classification of the passing vehicles. This is especially difficult when the images used for classification only cover parts of the vehicle. To obtain information about the whole vehicle, we reconstruct the vehicle as a 3D object and exploit this additional information within a Convolutional Neural Network (CNN). However, when using deep networks for 3D object classification, large amounts of dense 3D models are required for good accuracy, which are often neither available nor feasible to process due to memory requirements. Therefore, in our method we reproject the 3D object onto the image plane using the reconstructed points, lines, or both. We utilize this sparse depth prior within an auxiliary network branch that acts as a regularizer during training. We show that this auxiliary regularizer helps to improve accuracy compared to 2D classification on a real-world dataset. Furthermore, due to the design of the network, at test time only the 2D camera images are required for classification, which enables usage in portable computer vision systems.
AB - Automated toll systems rely on proper classification of the passing vehicles. This is especially difficult when the images used for classification only cover parts of the vehicle. To obtain information about the whole vehicle, we reconstruct the vehicle as a 3D object and exploit this additional information within a Convolutional Neural Network (CNN). However, when using deep networks for 3D object classification, large amounts of dense 3D models are required for good accuracy, which are often neither available nor feasible to process due to memory requirements. Therefore, in our method we reproject the 3D object onto the image plane using the reconstructed points, lines, or both. We utilize this sparse depth prior within an auxiliary network branch that acts as a regularizer during training. We show that this auxiliary regularizer helps to improve accuracy compared to 2D classification on a real-world dataset. Furthermore, due to the design of the network, at test time only the 2D camera images are required for classification, which enables usage in portable computer vision systems.
UR - https://arxiv.org/abs/1805.03511
UR - http://www.scopus.com/inward/record.url?scp=85060441996&partnerID=8YFLogxK
U2 - 10.1109/ITSC.2018.8569670
DO - 10.1109/ITSC.2018.8569670
M3 - Conference contribution
VL - 2018-November
SP - 3212
EP - 3217
BT - 2018 IEEE Intelligent Transportation Systems Conference, ITSC 2018
PB - Institute of Electrical and Electronics Engineers
ER -
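The abstract's core step — reprojecting reconstructed sparse SfM 3D points onto the image plane to form a sparse depth prior — can be illustrated with a standard pinhole-camera projection. This is a hypothetical sketch under the usual SfM conventions (intrinsics K, extrinsics [R|t]), not the authors' implementation; the function name and signature are invented for illustration.

```python
import numpy as np

def sparse_depth_map(points_3d, K, R, t, height, width):
    """Project Nx3 reconstructed world points through a pinhole camera
    with intrinsics K and pose [R|t]; return an HxW map that holds the
    camera-space depth at each hit pixel and 0 everywhere else.
    (Illustrative only; the paper's actual projection code is not shown.)"""
    depth = np.zeros((height, width), dtype=np.float32)
    cam = R @ points_3d.T + t.reshape(3, 1)   # 3xN points in camera frame
    z = cam[2]
    valid = z > 0                             # keep points in front of the camera
    uv = (K @ cam[:, valid]) / z[valid]       # perspective divide -> pixel coords
    u = np.round(uv[0]).astype(int)
    v = np.round(uv[1]).astype(int)
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    depth[v[inside], u[inside]] = z[valid][inside]
    return depth

# Toy usage: a single point 5 m straight ahead lands at the principal point.
K = np.array([[100.0, 0.0, 32.0],
              [0.0, 100.0, 24.0],
              [0.0,   0.0,  1.0]])
prior = sparse_depth_map(np.array([[0.0, 0.0, 5.0]]), K, np.eye(3), np.zeros(3), 48, 64)
```

Such a sparse map would supervise the auxiliary branch only at the few pixels where SfM points project, which is what lets it act as a training-time regularizer while the 2D branch alone is used at test time.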