Orak: A Foundational Benchmark for Training and Evaluating LLM Agents on Diverse Video Games
Abstract
Orak is a benchmark for training and evaluating LLM agents across diverse video games, featuring a plug-and-play interface and fine-tuning datasets to enhance agentic modules and gameplay.
Large Language Model (LLM) agents are reshaping the game industry, particularly by enabling more intelligent and human-preferable game characters. However, existing game benchmarks fall short of practical needs: they lack evaluations of diverse LLM capabilities across various game genres, studies of agentic modules crucial for complex gameplay, and fine-tuning datasets for aligning pre-trained LLMs into gaming agents. To fill these gaps, we present Orak, a foundational benchmark designed to train and evaluate LLM agents across diverse real-world video games. Unlike existing benchmarks, Orak includes 12 popular video games spanning all major genres, enabling comprehensive studies of LLM capabilities and agentic modules essential for intricate game scenarios. To support consistent evaluation of LLMs, we introduce a plug-and-play interface based on the Model Context Protocol (MCP) that enables LLMs to seamlessly connect with games and manipulate agentic modules. Additionally, we propose a fine-tuning dataset consisting of LLM gameplay trajectories across diverse game genres. Orak offers a comprehensive evaluation framework, encompassing general game score leaderboards, LLM battle arenas, and in-depth analyses of visual input state, agentic strategies, and fine-tuning effects, establishing a foundation towards building generic gaming agents. Code is available at https://github.com/krafton-ai/Orak.
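The abstract describes a plug-and-play interface in which any LLM backend can be connected to any of the benchmark's games through a uniform observe/act surface. The sketch below is a hypothetical illustration of that design pattern, not Orak's actual API: the `GameEnv` base class, `CounterGame` toy game, and `run_agent` loop are all invented for this example, and the `policy` callable stands in for a real LLM call.

```python
from abc import ABC, abstractmethod

class GameEnv(ABC):
    """Hypothetical plug-and-play game interface: every game exposes the
    same observe/act/done surface, so LLM backends can be swapped freely."""

    @abstractmethod
    def observe(self) -> str:
        """Return a text description of the current game state."""

    @abstractmethod
    def act(self, action: str) -> None:
        """Apply a text action chosen by the agent."""

    @abstractmethod
    def done(self) -> bool:
        """Report whether the episode has ended."""

class CounterGame(GameEnv):
    """Toy game for illustration: reach a target count by incrementing."""
    def __init__(self, target: int = 3):
        self.count, self.target = 0, target
    def observe(self) -> str:
        return f"count={self.count}, target={self.target}"
    def act(self, action: str) -> None:
        if action == "increment":
            self.count += 1
    def done(self) -> bool:
        return self.count >= self.target

def run_agent(env: GameEnv, policy, max_steps: int = 10) -> int:
    """Generic agent loop: `policy` maps an observation to an action and
    stands in for an LLM (or LLM + agentic modules) in a real setup."""
    steps = 0
    while not env.done() and steps < max_steps:
        env.act(policy(env.observe()))
        steps += 1
    return steps

steps = run_agent(CounterGame(), lambda obs: "increment")
print(steps)  # 3
```

Because the game and the decision policy only meet through this narrow interface, the same evaluation harness can drive different games and different LLMs without per-pair glue code, which is the property the benchmark's MCP-based design aims for.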
Community
We introduce Orak (오락), a foundational benchmark for training and evaluating LLM agents on diverse real-world video games.
*The name Orak comes from 오락 (orak), the native Korean word meaning "game".
The following similar papers were recommended by the Semantic Scholar API:
- Code2Logic: Game-Code-Driven Data Synthesis for Enhancing VLMs General Reasoning (2025)
- VideoGameQA-Bench: Evaluating Vision-Language Models for Video Game Quality Assurance (2025)
- G1: Bootstrapping Perception and Reasoning Abilities of Vision-Language Model via Reinforcement Learning (2025)
- Werewolf: A Straightforward Game Framework with TTS for Improved User Engagement (2025)
- Reinforcement Learning Tuning for VideoLLMs: Reward Design and Data Efficiency (2025)
- VideoGameBench: Can Vision-Language Models complete popular video games? (2025)
- 4Hammer: a board-game reinforcement learning environment for the hour long time frame (2025)