---
license: mit
task_categories:
  - image-to-text
language:
  - en
size_categories:
  - 1K<n<10K
configs:
  - config_name: default
    data_files:
      - split: dev
        path: showdown-clicks-dev/data.csv
---

# showdown-clicks

General Agents

🤗 Dataset | GitHub

showdown is a suite of offline and online benchmarks for computer-use agents.

showdown-clicks is a collection of 5,679 left clicks of humans performing various tasks in a macOS desktop environment. It is intended to evaluate instruction-following and low-level control capabilities of computer-use agents.

As of March 2025, we are releasing a subset of the full set, showdown-clicks-dev, containing 557 clicks. All examples are annotated with the bounding box of viable click locations for the UI element.

The episodes range from tens of seconds to minutes, and screenshots are between WXGA (1280×800) and WSXGA+ (1680×1050). The recordings contain no PII and were collected in late 2024.
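
For quick inspection, the dev split can be loaded with the 🤗 `datasets` library. A minimal sketch, assuming the dataset is hosted under the `generalagents/showdown-clicks` repo ID (adjust if the Hub path differs):

```python
from datasets import load_dataset

# Repo ID is an assumption; substitute the actual Hub path if it differs.
ds = load_dataset("generalagents/showdown-clicks", split="dev")

print(len(ds))               # expected to be 557 for the dev split
print(ds[0]["instruction"])  # natural language instruction for the first example
```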

| Column | Description |
|---|---|
| id | Unique identifier for each data entry (alphanumeric string) |
| image | Path to the screenshot image file showing the UI state |
| instruction | Natural language instruction describing the task to be performed |
| x1 | Top-left x-coordinate of the bounding box |
| y1 | Top-left y-coordinate of the bounding box |
| x2 | Bottom-right x-coordinate of the bounding box |
| y2 | Bottom-right y-coordinate of the bounding box |
| width | Width of the image |
| height | Height of the image |
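
The raw CSV can also be read directly. A small sketch, assuming the `showdown-clicks-dev/data.csv` path from the config above, that computes a normalised bounding-box centre from these columns:

```python
import pandas as pd

# Path taken from the dataset config; adjust it to your local checkout.
df = pd.read_csv("showdown-clicks-dev/data.csv")
print(df.columns.tolist())

row = df.iloc[0]
# Bounding-box centre, normalised to [0, 1] using the recorded image size.
cx = (row["x1"] + row["x2"]) / 2 / row["width"]
cy = (row["y1"] + row["y2"]) / 2 / row["height"]
print(row["instruction"], (cx, cy))
```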

## showdown-clicks-dev Results

| Model | Accuracy | 95% CI | Latency[^1] | 95% CI |
|---|---|---|---|---|
| ace-control-medium | 77.56% | +3.41%/-3.59% | 533ms | +8ms/-7ms |
| ace-control-small | 72.89% | +3.59%/-3.77% | 324ms | +7ms/-7ms |
| Operator (OpenAI CUA, macOS) | 64.27% | +3.95%/-3.95% | 6385ms | +182ms/-177ms |
| Molmo-72B-0924 | 54.76% | +4.13%/-4.13% | 6599ms | +113ms/-114ms |
| Claude 3.7 Sonnet (Thinking, Computer Use) | 53.68% | +4.13%/-4.13% | 9656ms | +95ms/-97ms |
| UI-TARS-72B-SFT | 54.4% | +4.13%/-4.13% | 1977ms | +15ms/-16ms |
| OmniParser V2 + GPT-4o | 51.71% | +4.12%/-4.13% | 12642ms | +361ms/-349ms |
| Gemini 2.0 Flash | 33.39% | +3.95%/-3.95% | 3069ms | +16ms/-16ms |
| Qwen2.5-VL-72B-Instruct | 24.78% | +3.59%/-3.60% | 3790ms | +57ms/-55ms |
| GPT-4o | 5.21% | +1.97%/-1.80% | 2500ms | +49ms/-48ms |

## Metrics

The evaluation script reports the percentage of predicted clicks that fall within the annotated bounding box, with 95% confidence intervals computed by bootstrapping.
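
The scoring logic can be sketched in a few lines. This is not the repository's evaluation script, and `predict_click` below is a hypothetical stand-in (a centre-of-screen baseline) for the model under test:

```python
import numpy as np
import pandas as pd

def in_box(row, x, y):
    """A predicted click is correct if it lands inside the annotated bounding box."""
    return row["x1"] <= x <= row["x2"] and row["y1"] <= y <= row["y2"]

def predict_click(row):
    """Placeholder predictor (centre-of-screen baseline); swap in your model here."""
    return row["width"] / 2, row["height"] / 2

def bootstrap_ci(correct, n_resamples=10_000, seed=0):
    """Accuracy plus a bootstrapped 95% confidence interval."""
    rng = np.random.default_rng(seed)
    correct = np.asarray(correct, dtype=float)
    resampled = rng.choice(correct, size=(n_resamples, len(correct)), replace=True)
    accs = resampled.mean(axis=1)
    return correct.mean(), np.percentile(accs, 2.5), np.percentile(accs, 97.5)

df = pd.read_csv("showdown-clicks-dev/data.csv")
correct = [in_box(row, *predict_click(row)) for _, row in df.iterrows()]
acc, lo, hi = bootstrap_ci(correct)
print(f"accuracy {acc:.2%}, 95% CI [{lo:.2%}, {hi:.2%}]")
```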

## License

This project is licensed under the MIT License - see the LICENSE file for details.

## Disclaimer

The images used in this evaluation dataset may contain content that some users might find offensive, inappropriate, or objectionable. These images are included solely for the purpose of evaluating model performance on realistic computer use scenarios.

We do not endorse, approve of, or claim responsibility for any content displayed in these images. The inclusion of any image in this dataset does not represent our views or opinions, and is not intended to promote any particular content, website, or viewpoint.

Researchers and users of this evaluation framework should be aware of this possibility when reviewing results and visualizations.

## Citation

If you use showdown-clicks in your research, please cite it as follows:

```bibtex
@misc{showdown2025,
  title={The Showdown Computer Control Evaluation Suite},
  author={General Agents Team},
  year={2025},
  url={https://github.com/generalagents/showdown},
}
```

[^1]: Latency varies significantly with provider, demand, computational resources, geographical location, and other factors, most of which are opaque to us for models we do not have direct access to. Ace models are served via the General Agents API; Qwen, Claude, Gemini, and OpenAI models use their respective first-party APIs; Molmo, UI-TARS, and OmniParser models are served through Modal.