Vergleich von kontinuierlichen Single-Agent Reinforcement Learning-Steuerungen in einer simulierten Logistikumgebung mit NVIDIA Omniverse (Comparison of continuous single-agent reinforcement learning controllers in a simulated logistics environment with NVIDIA Omniverse)
DOI:
https://doi.org/10.2195/lj_proc_wesselhoeft_en_202310_01
Keywords:
Autonome Roboter, Künstliche Intelligenz, Logistik 4.0, Reinforcement Learning, Robotik, artificial intelligence, autonomous mobile robots, logistics 4.0, reinforcement learning, robotics
Abstract
With the transition to Logistics 4.0, the increasing demand for autonomous mobile robots (AMR) in logistics has amplified the complexity of fleet control in dynamic environments. Reinforcement learning (RL), particularly decentralized RL algorithms, has emerged as a potential solution given its ability to learn in uncertain terrains. While discrete RL structures have shown merit, their adaptability in logistics remains questionable due to their inherent limitations. This paper presents a comparative analysis of continuous RL algorithms - Advantage Actor-Critic (A2C), Deep Deterministic Policy Gradient (DDPG), and Proximal Policy Optimization (PPO) - in the context of controlling a Turtlebot3 within a warehouse scenario. Our findings reveal A2C as the frontrunner in terms of success rate and training time, DDPG excels at minimizing the number of steps, while PPO distinguishes itself primarily through its relatively short training duration. This study underscores the potential of continuous RL algorithms, especially A2C, for the future of AMR fleet management in logistics. Significant work remains to be done, particularly in the area of algorithmic fine-tuning.
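To illustrate one of the three algorithms compared, the per-sample clipped surrogate objective that characterizes PPO can be sketched as follows. This is a generic illustration of the standard PPO mechanism, not code from the paper; the function name and the default clipping value of 0.2 are assumptions for the sketch.

```python
def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Per-sample PPO clipped surrogate objective.

    ratio:     pi_new(a|s) / pi_old(a|s), the probability ratio of the
               (here continuous) action under the new vs. old policy.
    advantage: estimated advantage A(s, a).
    eps:       clipping parameter (0.2 is a common default).
    """
    # Clamp the ratio to the interval [1 - eps, 1 + eps].
    clipped_ratio = max(min(ratio, 1.0 + eps), 1.0 - eps)
    # Taking the minimum makes the objective pessimistic: the policy
    # gains nothing from moving the ratio beyond the clipping range.
    return min(ratio * advantage, clipped_ratio * advantage)
```

For example, a ratio of 1.5 with a positive advantage of 1.0 is clipped, so the objective contributes only 1.2 instead of 1.5, which limits the size of any single policy update.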
Published
2023-10-11
How to Cite
Wesselhöft, M., Braun, P., & Kreutzfeldt, J. (2023). Vergleich von kontinuierlichen Single-Agent Reinforcement Learning-Steuerungen in einer simulierten Logistikumgebung mit NVIDIA Omniverse. Logistics Journal: Proceedings, (19). https://doi.org/10.2195/lj_proc_wesselhoeft_en_202310_01
Section
Artikel (Article)
License
Copyright (c) 2023 Mike Wesselhöft, Philipp Braun, Jochen Kreutzfeldt

This work is licensed under a Creative Commons Attribution 4.0 International License.