
Improving Token-based World Models With Parallel Observation Prediction

Lior Cohen, Kaixin Wang, Bingyi Kang, Shie Mannor. arXiv 2024

Tags: Agentic, Has Code, Model Architecture, Pretraining Methods, Reinforcement Learning, Training Techniques, Transformer

Motivated by the success of Transformers when applied to sequences of discrete symbols, token-based world models (TBWMs) were recently proposed as sample-efficient methods. In TBWMs, the world model consumes agent experience as a language-like sequence of tokens, where each observation constitutes a sub-sequence. However, during imagination, the sequential token-by-token generation of next observations results in a severe bottleneck, leading to long training times, poor GPU utilization, and limited representations. To resolve this bottleneck, we devise a novel Parallel Observation Prediction (POP) mechanism. POP augments a Retentive Network (RetNet) with a novel forward mode tailored to our reinforcement learning setting. We incorporate POP in a novel TBWM agent named REM (Retentive Environment Model), showcasing a 15.4x faster imagination compared to prior TBWMs. REM attains superhuman performance on 12 out of 26 games of the Atari 100K benchmark, while training in less than 12 hours. Our code is available at https://github.com/leor-c/REM.
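The abstract contrasts token-by-token observation generation with predicting all tokens of the next observation in one pass. As a rough illustration of that distinction only, here is a minimal toy sketch in PyTorch. It is not REM's RetNet-based POP mechanism: the GRU backbone, the dimensions, and the dummy placeholder query tokens are assumptions made purely for this example.

```python
# Hypothetical sketch: sequential vs. parallel prediction of an observation's
# K tokens by a toy sequence model. Not the authors' implementation.
import torch
import torch.nn as nn

VOCAB, DIM, K = 256, 64, 16  # vocabulary size, hidden dim, tokens per observation


class ToySequenceModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.backbone = nn.GRU(DIM, DIM, batch_first=True)  # stand-in for a Transformer/RetNet
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, tokens):                   # tokens: (B, T)
        h, _ = self.backbone(self.embed(tokens))
        return self.head(h)                      # logits: (B, T, VOCAB)


model = ToySequenceModel()
context = torch.randint(0, VOCAB, (1, 32))       # imagined trajectory so far

# Sequential generation: K separate forward passes, one per observation token
# (the bottleneck the abstract describes).
seq = context.clone()
for _ in range(K):
    logits = model(seq)[:, -1]                   # predict only the next token
    nxt = logits.argmax(-1, keepdim=True)
    seq = torch.cat([seq, nxt], dim=1)

# Parallel-prediction idea: append K placeholder "query" tokens and decode the
# whole next observation in a single forward pass. The zero-token placeholders
# are an assumption for illustration.
placeholders = torch.zeros(1, K, dtype=torch.long)
logits = model(torch.cat([context, placeholders], dim=1))[:, -K:]
obs_tokens = logits.argmax(-1)                   # all K tokens at once
print(obs_tokens.shape)                          # torch.Size([1, 16])
```

In the sketch, the parallel path issues one backbone call instead of K, which is the kind of saving the paper reports; how REM conditions those predictions via the RetNet forward mode is detailed in the paper itself.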

Similar Work