Formula RL: Deep Reinforcement Learning for Autonomous Racing using Telemetry Data

Adrian Remonda, Sarah Krebs, Eduardo Enrique Veas, Granit Luzhnica, Roman Kern

Research output: Contribution to conference › Paper › Research › peer-review

Abstract

This paper explores the use of reinforcement learning (RL) models for autonomous racing. In contrast to passenger cars, where safety is the top priority, a racing car aims to minimize lap time. We frame the problem as a reinforcement learning task with a multidimensional input consisting of the vehicle telemetry, and a continuous action space. To find out which RL methods better solve the problem and whether the obtained models generalize to driving on unknown tracks, we put 10 variants of deep deterministic policy gradient (DDPG) to race in two experiments: i) studying how RL methods learn to drive a racing car and ii) studying how the learning scenario influences the capability of the models to generalize. Our studies show that models trained with RL are not only able to drive faster than the baseline open source handcrafted bots but also generalize to unknown tracks.
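The abstract frames the task as RL over a multidimensional telemetry state with a continuous action space, which is the setting DDPG is designed for. A minimal sketch of that interface, assuming hypothetical telemetry field names and a steering/throttle/brake action layout (neither is specified in this record):

```python
import numpy as np

# Hypothetical telemetry fields; names and dimensionality are illustrative,
# not taken from the paper.
TELEMETRY_FIELDS = [
    "speed_x", "speed_y", "angle_to_track", "track_position", "rpm",
]

def make_observation(telemetry: dict) -> np.ndarray:
    """Flatten one telemetry reading into the multidimensional state vector."""
    return np.array([telemetry[f] for f in TELEMETRY_FIELDS], dtype=np.float32)

def clip_action(raw_action: np.ndarray) -> np.ndarray:
    """Continuous action: steering in [-1, 1], throttle and brake in [0, 1].

    A DDPG actor outputs real-valued actions; clipping (or tanh/sigmoid
    squashing inside the network) keeps them in the valid control range.
    """
    steering = np.clip(raw_action[0], -1.0, 1.0)
    throttle = np.clip(raw_action[1], 0.0, 1.0)
    brake = np.clip(raw_action[2], 0.0, 1.0)
    return np.array([steering, throttle, brake], dtype=np.float32)
```

This only sketches the state/action plumbing; the paper's actual observation layout, reward, and the 10 DDPG variants are not described in this record.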
Original language: English
Publication status: Published - 2019
Event: Workshop on Scaling-Up Reinforcement Learning (SURL) - Macau, China
Duration: 10 Aug 2019 – 16 Aug 2019
http://surl.tirl.info/?p=program&y=2019

Workshop

Workshop: Workshop on Scaling-Up Reinforcement Learning
Country: China
Period: 10/08/19 – 16/08/19
Internet address: http://surl.tirl.info/?p=program&y=2019


Keywords

  • Reinforcement Learning
  • Autonomous driving

Cite this

Remonda, A., Krebs, S., Veas, E. E., Luzhnica, G., & Kern, R. (2019). Formula RL: Deep Reinforcement Learning for Autonomous Racing using Telemetry Data. Paper presented at Workshop on Scaling-Up Reinforcement Learning, China.

Formula RL: Deep Reinforcement Learning for Autonomous Racing using Telemetry Data. / Remonda, Adrian; Krebs, Sarah; Veas, Eduardo Enrique; Luzhnica, Granit; Kern, Roman.

2019. Paper presented at Workshop on Scaling-Up Reinforcement Learning, China.


Remonda, A, Krebs, S, Veas, EE, Luzhnica, G & Kern, R 2019, 'Formula RL: Deep Reinforcement Learning for Autonomous Racing using Telemetry Data', Paper presented at Workshop on Scaling-Up Reinforcement Learning, China, 10/08/19 - 16/08/19.
Remonda A, Krebs S, Veas EE, Luzhnica G, Kern R. Formula RL: Deep Reinforcement Learning for Autonomous Racing using Telemetry Data. 2019. Paper presented at Workshop on Scaling-Up Reinforcement Learning, China.
Remonda, Adrian; Krebs, Sarah; Veas, Eduardo Enrique; Luzhnica, Granit; Kern, Roman. Formula RL: Deep Reinforcement Learning for Autonomous Racing using Telemetry Data. Paper presented at Workshop on Scaling-Up Reinforcement Learning, China.
@conference{fca9e8ec584442b3985fcca818923efc,
title = "Formula RL: Deep Reinforcement Learning for Autonomous Racing using Telemetry Data",
abstract = "This paper explores the use of reinforcement learning (RL) models for autonomous racing. In contrast to passenger cars, where safety is the top priority, a racing car aims to minimize lap time. We frame the problem as a reinforcement learning task with a multidimensional input consisting of the vehicle telemetry, and a continuous action space. To find out which RL methods better solve the problem and whether the obtained models generalize to driving on unknown tracks, we put 10 variants of deep deterministic policy gradient (DDPG) to race in two experiments: i) studying how RL methods learn to drive a racing car and ii) studying how the learning scenario influences the capability of the models to generalize. Our studies show that models trained with RL are not only able to drive faster than the baseline open source handcrafted bots but also generalize to unknown tracks.",
keywords = "Reinforcement Learning, Autonomous driving",
author = "Adrian Remonda and Sarah Krebs and Veas, {Eduardo Enrique} and Granit Luzhnica and Roman Kern",
year = "2019",
language = "English",
note = "Workshop on Scaling-Up Reinforcement Learning (SURL); Conference date: 10-08-2019 Through 16-08-2019",
url = "http://surl.tirl.info/?p=program&y=2019",

}

TY - CONF

T1 - Formula RL: Deep Reinforcement Learning for Autonomous Racing using Telemetry Data

AU - Remonda, Adrian

AU - Krebs, Sarah

AU - Veas, Eduardo Enrique

AU - Luzhnica, Granit

AU - Kern, Roman

PY - 2019

Y1 - 2019

N2 - This paper explores the use of reinforcement learning (RL) models for autonomous racing. In contrast to passenger cars, where safety is the top priority, a racing car aims to minimize lap time. We frame the problem as a reinforcement learning task with a multidimensional input consisting of the vehicle telemetry, and a continuous action space. To find out which RL methods better solve the problem and whether the obtained models generalize to driving on unknown tracks, we put 10 variants of deep deterministic policy gradient (DDPG) to race in two experiments: i) studying how RL methods learn to drive a racing car and ii) studying how the learning scenario influences the capability of the models to generalize. Our studies show that models trained with RL are not only able to drive faster than the baseline open source handcrafted bots but also generalize to unknown tracks.

AB - This paper explores the use of reinforcement learning (RL) models for autonomous racing. In contrast to passenger cars, where safety is the top priority, a racing car aims to minimize lap time. We frame the problem as a reinforcement learning task with a multidimensional input consisting of the vehicle telemetry, and a continuous action space. To find out which RL methods better solve the problem and whether the obtained models generalize to driving on unknown tracks, we put 10 variants of deep deterministic policy gradient (DDPG) to race in two experiments: i) studying how RL methods learn to drive a racing car and ii) studying how the learning scenario influences the capability of the models to generalize. Our studies show that models trained with RL are not only able to drive faster than the baseline open source handcrafted bots but also generalize to unknown tracks.

KW - Reinforcement Learning

KW - Autonomous driving

M3 - Paper

ER -