A Reinforcement Learning Environment For Job-Shop Scheduling

Pierre Paul Alain Tassel*, Martin Gebser, Konstantin Schekotihin

*Corresponding author for this work

Research output: Contribution to conference › Paper › peer-review

Abstract

Scheduling is a fundamental task in many automated systems; e.g., optimal schedules for the machines of a job shop reduce production costs and waste. However, finding such schedules is often intractable, and Combinatorial Optimization Problem (COP) methods frequently fail to compute them within a given time limit. Recent advances in Deep Reinforcement Learning (DRL) for learning complex behavior open up new ways to tackle COPs. This paper presents an efficient DRL environment for Job-Shop Scheduling – an important problem in the field. Furthermore, we design a meaningful and compact state representation as well as a novel, simple dense reward function that is closely related to the sparse makespan minimization criterion used by COP methods.
We demonstrate that our approach significantly outperforms existing DRL methods on classic benchmark instances, coming close to state-of-the-art COP approaches.
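To make the setting concrete, the sketch below shows how a dispatching-style Job-Shop Scheduling environment with a Gym-like reset/step interface and a dense, makespan-based reward could look. This is a minimal illustrative assumption, not the environment described in the paper: the class name JobShopEnv, the state vector, the reward shaping, and the trivial dispatching policy in the usage snippet are all hypothetical. The reward is the negative increase of the partial makespan, so the cumulative return telescopes to minus the final makespan, which keeps the dense signal aligned with the sparse makespan minimization criterion.

    import numpy as np

    class JobShopEnv:
        """Minimal Job-Shop Scheduling environment sketch (hypothetical, Gym-like).

        jobs[j] is a list of (machine, duration) operations to run in order.
        An action picks a job whose next operation is dispatched greedily.
        """

        def __init__(self, jobs):
            self.jobs = jobs
            self.num_jobs = len(jobs)
            self.num_machines = 1 + max(m for job in jobs for m, _ in job)
            self.reset()

        def reset(self):
            self.next_op = [0] * self.num_jobs               # next operation index per job
            self.job_ready = [0.0] * self.num_jobs           # time each job becomes free
            self.machine_ready = [0.0] * self.num_machines   # time each machine becomes free
            return self._state()

        def _state(self):
            # Compact state: per-job progress plus job and machine readiness times.
            progress = [self.next_op[j] / len(self.jobs[j]) for j in range(self.num_jobs)]
            return np.array(progress + self.job_ready + self.machine_ready, dtype=np.float32)

        def step(self, job):
            assert self.next_op[job] < len(self.jobs[job]), "job already finished"
            machine, duration = self.jobs[job][self.next_op[job]]
            start = max(self.job_ready[job], self.machine_ready[machine])
            finish = start + duration
            old_makespan = max(self.job_ready + self.machine_ready)
            self.job_ready[job] = finish
            self.machine_ready[machine] = finish
            self.next_op[job] += 1
            # Dense reward: negative growth of the partial makespan; the return
            # over a full episode equals minus the final makespan.
            reward = -(max(finish, old_makespan) - old_makespan)
            done = all(self.next_op[j] == len(self.jobs[j]) for j in range(self.num_jobs))
            return self._state(), reward, done, {}

    # Toy usage with a trivial "first unfinished job" dispatching policy.
    instance = [[(0, 3), (1, 2)], [(1, 2), (0, 4)]]  # 2 jobs, 2 machines
    env = JobShopEnv(instance)
    state, done, total = env.reset(), False, 0.0
    while not done:
        candidates = [j for j in range(env.num_jobs) if env.next_op[j] < len(env.jobs[j])]
        state, reward, done, _ = env.step(candidates[0])
        total += reward
    print("makespan:", -total)  # -> makespan: 11.0 for this toy instance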
Original language: English
Publication status: Published - Aug 2021
Event: 2021 PRL Workshop – Bridging the Gap Between AI Planning and Reinforcement Learning - Virtual, China
Duration: 5 Aug 2021 – 6 Aug 2021

Conference

Conference: 2021 PRL Workshop – Bridging the Gap Between AI Planning and Reinforcement Learning
Country/Territory: China
City: Virtual
Period: 5/08/21 – 6/08/21
