3D visual perception for self-driving cars using a multi-camera system: Calibration, mapping, localization, and obstacle detection

Christian Häne, Lionel Heng, Gim Hee Lee, Friedrich Fraundorfer, Paul Furgale, Torsten Sattler, Marc Pollefeys

Publication: Contribution to journal › Article › Research › Peer-reviewed

Abstract

Cameras are a crucial exteroceptive sensor for self-driving cars as they are low-cost and small, provide appearance information about the environment, and work in various weather conditions. They can be used for multiple purposes such as visual navigation and obstacle detection. We can use a surround multi-camera system to cover the full 360-degree field of view around the car. In this way, we avoid blind spots, which can otherwise lead to accidents. To minimize the number of cameras needed for surround perception, we utilize fisheye cameras. Consequently, standard vision pipelines for 3D mapping, visual localization, obstacle detection, etc. need to be adapted to take full advantage of the availability of multiple cameras rather than treating each camera individually. In addition, processing of fisheye images has to be supported. In this paper, we describe the camera calibration and subsequent processing pipeline for multi-fisheye-camera systems developed as part of the V-Charge project, which seeks to enable automated valet parking for self-driving cars. Our pipeline is able to precisely calibrate multi-camera systems, build sparse 3D maps for visual navigation, visually localize the car with respect to these maps, generate accurate dense maps, and detect obstacles based on real-time depth map extraction.
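
The geometric ingredients named in the abstract, a fisheye projection model plus per-camera rig extrinsics, can be sketched in a few lines. The following is a minimal illustration only, not the paper's actual pipeline: it assumes the equidistant fisheye model (r = f·θ) and invents the intrinsics, the mounting pose, and the coordinate conventions, whereas the V-Charge calibration estimates these quantities and may use a different camera model.

    import numpy as np

    def project_equidistant(X_cam, fx, fy, cx, cy):
        """Project a 3D point in camera coordinates (z forward) to pixel
        coordinates under the equidistant fisheye model r = f * theta."""
        x, y, z = X_cam
        theta = np.arctan2(np.hypot(x, y), z)  # angle off the optical axis
        phi = np.arctan2(y, x)                 # azimuth around the axis
        return np.array([cx + fx * theta * np.cos(phi),
                         cy + fy * theta * np.sin(phi)])

    # Rig extrinsics map vehicle-body coordinates (x forward, y left, z up)
    # into camera coordinates (z forward, x right, y down). The rotation and
    # mounting position below are invented for illustration.
    R_cam_body = np.array([[0.0, -1.0,  0.0],
                           [0.0,  0.0, -1.0],
                           [1.0,  0.0,  0.0]])
    t_body = np.array([3.7, 0.0, 0.6])   # hypothetical front-bumper mount

    X_body = np.array([8.0, 0.5, 0.0])   # obstacle 8 m ahead, slightly left
    X_cam = R_cam_body @ (X_body - t_body)
    u, v = project_equidistant(X_cam, fx=320.0, fy=320.0, cx=640.0, cy=400.0)
    print(f"pixel: ({u:.1f}, {v:.1f})")

In this setting, multi-camera calibration amounts to estimating, for each camera, intrinsics such as (fx, fy, cx, cy) together with the body-to-camera transform, so that observations from all cameras can be expressed in a single vehicle frame.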

Original language: English
Pages (from-to): 14-27
Number of pages: 14
Journal: Image and Vision Computing
Volume: 68
DOI: 10.1016/j.imavis.2017.07.003
Publication status: Published - 1 Dec 2017

Keywords

Calibration, Fisheye camera, Localization, Mapping, Multi-camera system, Obstacle detection

ASJC Scopus subject areas

• Signal Processing
• Computer Vision and Pattern Recognition

Cite this

Häne, C., Heng, L., Lee, G. H., Fraundorfer, F., Furgale, P., Sattler, T., & Pollefeys, M. (2017). 3D visual perception for self-driving cars using a multi-camera system: Calibration, mapping, localization, and obstacle detection. Image and Vision Computing, 68, 14-27. https://doi.org/10.1016/j.imavis.2017.07.003

Scopus: http://www.scopus.com/inward/record.url?scp=85028449152&partnerID=8YFLogxK

    @article{7760c794388c48b285a2323ca5f910d3,
    title = "3D visual perception for self-driving cars using a multi-camera system: Calibration, mapping, localization, and obstacle detection",
    keywords = "Calibration, Fisheye camera, Localization, Mapping, Multi-camera system, Obstacle detection",
    author = "Christian H{\"a}ne and Lionel Heng and Lee, {Gim Hee} and Friedrich Fraundorfer and Paul Furgale and Torsten Sattler and Marc Pollefeys",
    year = "2017",
    month = "12",
    day = "1",
    doi = "10.1016/j.imavis.2017.07.003",
    language = "English",
    volume = "68",
    pages = "14--27",
    journal = "Image and Vision Computing",
    issn = "0262-8856",
    publisher = "Elsevier GmbH",
    }
