---
license: apache-2.0
language:
- en
pipeline_tag: image-text-to-text
tags:
- multimodal
library_name: transformers
base_model:
- Qwen/Qwen2-VL-7B
---

# WebDreamer: Model-Based Planning for Web Agents

WebDreamer is a model-based planning framework that enables efficient and effective planning for real-world web agent tasks. Check out our paper for more details. This work is a collaboration between [OSUNLP](https://x.com/osunlp) and [Orby AI](https://www.orby.ai/).

![image](https://github.com/user-attachments/assets/a1189fee-ff43-45fc-a818-3dc6befb6ad2)

- **Repository:** https://github.com/OSU-NLP-Group/WebDreamer
- **Paper:** https://arxiv.org/abs/2411.06559
- **Point of Contact:** [Kai Zhang](mailto:zhang.13253@osu.edu)

## Models

- Dreamer-7B:
  - [General](https://huggingface.co/osunlp/Dreamer-7B)
  - [In-Domain-VWA-Shopping](https://huggingface.co/osunlp/Dreamer-7B-Shopping)
  - [In-Domain-VWA-Classifieds](https://huggingface.co/osunlp/Dreamer-7B-Classifieds)
  - [In-Domain-VWA-Reddit](https://huggingface.co/osunlp/Dreamer-7B-Reddit)

## Data: [Dreamer Training Data](https://huggingface.co/datasets/osunlp/Dreamer-V1-Data)

Each training example follows this schema:

```
root
 |-- prompt: string
 |-- image: binary
 |-- response: string
 |-- action: string
```
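
To inspect the data, here is a minimal sketch using the Hugging Face `datasets` library; the `train` split name is an assumption and may differ from the actual dataset configuration.

```python
# Minimal sketch: load and inspect one Dreamer-V1 training example.
# Assumption: the dataset can be loaded directly by repo id and exposes a "train" split.
from datasets import load_dataset

data = load_dataset("osunlp/Dreamer-V1-Data", split="train")

example = data[0]
print(example["prompt"])    # textual prompt given to the model
print(example["action"])    # the action string
print(example["response"])  # target response the model is trained to produce
# example["image"] holds the screenshot as binary data
```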

## Results

### Strong performance on VisualWebArena, Online-Mind2Web, and Mind2Web-live

| Benchmark | Method | Success Rate |
|---------------------|--------------------------|------------------|
| **VisualWebArena** | GPT-4o + Reactive | 17.6% |
| | GPT-4o + Tree Search | 26.2% |
| | **GPT-4o + WebDreamer** | 23.6% (↑34.1%) |
| **Online-Mind2Web** | GPT-4o + Reactive | 26.0% |
| | **GPT-4o + WebDreamer** | 37.0% (↑42.3%) |
| **Mind2Web-live** | GPT-4o + Reactive | 20.2% |
| | **GPT-4o + WebDreamer** | 25.0% (↑23.8%) |

Compared to the reactive baselines, WebDreamer achieves relative improvements of 34.1%, 42.3%, and 23.8% on VisualWebArena, Online-Mind2Web, and Mind2Web-live, respectively.

### Better efficiency than tree search with true interactions

WebDreamer explores the search space through simulations, which greatly reduces the reliance on real-world interactions while maintaining robust performance.

## Inference

### vLLM server

```bash
vllm serve osunlp/Dreamer-7B --api-key token-abc123 --dtype float16
```

or

```bash
python -m vllm.entrypoints.openai.api_server --served-model-name osunlp/Dreamer-7B --model osunlp/Dreamer-7B --dtype float16
```

You can find more instructions on training and inference in [Qwen2-VL's Official Repo](https://github.com/QwenLM/Qwen2-VL).

### Prompt

Our model is quite robust to the textual prompt, so feel free to try prompt variants beyond the one below, which we did not heavily explore.

```python
from openai import AsyncOpenAI

# Point the client at the vLLM server started above (default port 8000).
client = AsyncOpenAI(base_url="http://localhost:8000/v1", api_key="token-abc123")


def format_openai_template(description: str, base64_image):
    return [
        {
            "role": "user",
            "content": [
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{base64_image}"},
                },
                {
                    "type": "text",
                    "text": f"""
Below is current screenshot. Please describe what you would see after a {description}""",
                },
            ],
        },
    ]


messages = format_openai_template(description, base64_image)

# Run inside an async function (e.g., driven by asyncio.run).
completion = await client.chat.completions.create(
    model=args.model_path,  # e.g., "osunlp/Dreamer-7B"
    messages=messages,
    temperature=1.0,
)
```

## Citation Information

If you find this work useful, please consider citing our paper:

```
@article{Gu2024WebDreamer,
  author     = {Yu Gu and Kai Zhang and Yuting Ning and Boyuan Zheng and Boyu Gou and Tianci Xue and Cheng Chang and Sanjari Srivastava and Yanan Xie and Peng Qi and Huan Sun and Yu Su},
  title      = {Is Your LLM Secretly a World Model of the Internet? Model-Based Planning for Web Agents},
  journal    = {CoRR},
  volume     = {abs/2411.06559},
  year       = {2024},
  url        = {https://arxiv.org/abs/2411.06559},
  eprinttype = {arXiv},
  eprint     = {2411.06559},
}
```