dataset_info:
  features:
    - name: image
      dtype: image
    - name: box
      dtype:
        array2_d:
          shape:
            - 1
            - 4
          dtype: float32
    - name: class
      dtype: string
    - name: test_action
      dtype: string
    - name: expectation
      dtype: string
    - name: conclusion
      dtype: string
    - name: language
      dtype: string
    - name: brand
      dtype: string
  splits:
    - name: test
      num_bytes: 10799037234.96
      num_examples: 4208
  download_size: 2543121896
  dataset_size: 10799037234.96
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/test-*
license: cc-by-4.0
task_categories:
  - visual-question-answering
language:
  - de
  - en
tags:
  - automotive
  - car
  - ui
  - gui
  - interface

AutomotiveUI-Bench-4K

Dataset Overview: 998 images and 4,208 annotations focusing on interaction with in-vehicle infotainment (IVI) systems.

Key Features:

  • Serves as a validation benchmark for automotive UI.
  • Scope: Covers 15 automotive brands/OEMs, model years 2018-2025.
  • Image Source: Primarily photographs of IVI displays (due to screenshot limitations in most vehicles), with some direct screenshots (e.g., Android Auto).
  • Annotation Classes:
    • Test Action: Bounding box + imperative command in natural language.
    • Expected Result: Bounding box + expected outcome in natural language + Pass/Fail status.
    • Bounding boxes are in the format [[x0, y0, x1, y1]] (see the loading sketch after this list).
  • Languages:
    • IVI UI: German and English.
    • Annotations: English only (German UI text translated or quoted).
  • 15 Brands/OEMs (image count per brand):
    • VW: 170
    • Kia: 124
    • Audi: 91
    • Cupra: 85
    • Porsche: 78
    • Ford: 72
    • Maserati: 72
    • Mini: 60
    • BMW: 59
    • Peugeot: 52
    • Tesla: 51
    • Toyota: 34
    • Opel: 30
    • Apple CarPlay: 13
    • Google Android Auto: 7
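
A minimal loading sketch (assuming the datasets library from the setup below; the dataset ID and field names follow the metadata above, the printed comments are illustrative):

from datasets import load_dataset

# Load the single "test" split of the benchmark
dataset = load_dataset("sparks-solutions/AutomotiveUI-Bench-4K")["test"]

sample = dataset[0]
print(sample["class"])     # "Test Action" or "Expected Result"
print(sample["box"])       # [[x0, y0, x1, y1]]
print(sample["language"])  # "DE" or "EN"
print(sample["brand"])
if sample["class"] == "Test Action":
    print(sample["test_action"])
else:  # "Expected Result"
    print(sample["expectation"], sample["conclusion"])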

Usage

The corresponding model, ELAM-7B, is also available on Hugging Face.

Setup Environment for ELAM-7B
conda create -n elam python=3.10 -y
conda activate elam
pip install datasets==3.5.0 einops==0.8.1 torchvision==0.20.1 accelerate==1.6.0
pip install transformers==4.48.2
Data Loading and Inference with ELAM-7B
# Run inference on the AutomotiveUI-Bench-4K dataset on a local GPU
# Outputs will be written to a JSONL file
import json
import os
import time

import torch
from datasets import Dataset, load_dataset
from tqdm import tqdm
from transformers import AutoModelForCausalLM, AutoProcessor, GenerationConfig


def preprocess_prompt_elam(user_request: str, label_class: str) -> str:
    """Apply ELAM prompt template depending on class."""
    if label_class == "Expected Result":
        return f"Evaluate this statement about the image:\n'{user_request}'\nThink step by step, conclude whether the evaluation is 'PASSED' or 'FAILED' and point to the UI element that corresponds to this evaluation."
    elif label_class == "Test Action":
        return f"Identify and point to the UI element that corresponds to this test action:\n{user_request}"
    else:
        raise ValueError(f"Unknown label class: {label_class}")


def append_to_jsonl_file(data: dict, target_path: str) -> None:
    assert str(target_path).endswith(".jsonl")
    with open(target_path, "a", encoding="utf-8") as file:
        file.write(f"{json.dumps(data, ensure_ascii=False)}\n")


def run_inference(dataset: Dataset, model: AutoModelForCausalLM, processor: AutoProcessor):
    # Define output dir and file
    timestamp = time.strftime("%Y%m%d-%H%M%S")
    DEBUG_DIR = os.path.join("eval_output", timestamp)
    model_outputs_path = os.path.join(DEBUG_DIR, "model_outputs.jsonl")

    print(f"Writing data to: {model_outputs_path}")
    for sample_id, sample in enumerate(tqdm(dataset, desc="Processing")):
        image = sample["image"]

        gt_box = sample["box"][0]
        label_class = sample["class"]

        # read gt box
        utterance = None
        gt_status = None
        if "Expected Result" == label_class:
            utterance = sample["expectation"]
            gt_status = sample["conclusion"].upper()

        elif "Test Action" == label_class:
            utterance = sample["test_action"]
        else:
            raise ValueError(f"Did not find valid utterance for image #{sample_id}.")
        assert utterance

        # Apply prompt template
        rephrased_utterance = preprocess_prompt_elam(utterance, label_class)

        # Process the image and text
        inputs = processor.process(
            images=[image],
            text=rephrased_utterance,
        )

        # Move inputs to the correct device and make a batch of size 1, cast to bfloat16
        inputs_bfloat16 = {}
        for k, v in inputs.items():
            if v.dtype == torch.float32:
                inputs_bfloat16[k] = v.to(model.device).to(torch.bfloat16).unsqueeze(0)
            else:
                inputs_bfloat16[k] = v.to(model.device).unsqueeze(0)

        inputs = inputs_bfloat16  # Replace original inputs with the correctly typed inputs

        # Generate output
        output = model.generate_from_batch(
            inputs, GenerationConfig(max_new_tokens=2048, stop_strings="<|endoftext|>"), tokenizer=processor.tokenizer
        )

        # Only get generated tokens; decode them to text
        generated_tokens = output[0, inputs["input_ids"].size(1) :]
        response = processor.tokenizer.decode(generated_tokens, skip_special_tokens=True)

        # Ensure the output directory exists before appending
        os.makedirs(DEBUG_DIR, exist_ok=True)

        # append line to jsonl
        model_output_line = {
            "sample_id": sample_id,
            "input": rephrased_utterance,
            "output": response,
            "image_size": image.size,
            "gt_class": label_class,
            "gt_box": gt_box,
            "gt_status": gt_status,
            "language": sample["language"],
        }
        append_to_jsonl_file(model_output_line, target_path=model_outputs_path)


if __name__ == "__main__":
    # Set dataset
    dataset = load_dataset("sparks-solutions/AutomotiveUI-Bench-4K")["test"]

    # Load the processor
    model_name = "sparks-solutions/ELAM-7B"
    processor = AutoProcessor.from_pretrained(
        model_name, trust_remote_code=True, torch_dtype="bfloat16", device_map="auto"
    )

    # Load the model
    model = AutoModelForCausalLM.from_pretrained(
        model_name, trust_remote_code=True, torch_dtype="bfloat16", device_map="auto"
    )
    run_inference(dataset=dataset, processor=processor, model=model)
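
For reference, each line appended to model_outputs.jsonl is a JSON object with the fields below; the concrete values shown here are purely illustrative, not actual model output:

example_line = {
    "sample_id": 0,                                 # index of the sample in the test split
    "input": "Identify and point to ...",           # prompt after applying the ELAM template (truncated here)
    "output": '... <point x="47.5" y="12.8" ...',   # raw model response (illustrative)
    "image_size": [1280, 720],                      # (width, height) of the input image (illustrative)
    "gt_class": "Test Action",                      # "Test Action" or "Expected Result"
    "gt_box": [0.43, 0.10, 0.55, 0.16],             # ground-truth box [x0, y0, x1, y1] (illustrative values)
    "gt_status": None,                              # "PASSED"/"FAILED" for Expected Result samples, else None
    "language": "EN",                               # "DE" or "EN"
}
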
Parsing results and calculating metrics
import argparse
import json
import re
from pathlib import Path
from typing import Tuple

import numpy as np


def read_jsonl_file(path: str) -> list:
    assert str(path).endswith(".jsonl")
    data_list = []
    with open(path, "r", encoding="utf-8") as file:
        for line in file:
            data = json.loads(line)
            data_list.append(data)
    return data_list


def write_json_file(data: dict | list, path: str) -> None:
    assert str(path).endswith(".json")
    with open(path, "w", encoding="utf-8") as outfile:
        json.dump(data, outfile, ensure_ascii=False, indent=4)


def postprocess_response_elam(response: str) -> Tuple[float, float]:
    """Parse Molmo-style point coordinates from string."""
    pattern = r'<point x="(?P<x>\d+\.\d+)" y="(?P<y>\d+\.\d+)"'
    match = re.search(pattern, response)
    if match:
        x_coord_raw = float(match.group("x"))
        y_coord_raw = float(match.group("y"))
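        # Assumption: the model reports point coordinates as percentages (0-100) of the image size,
        # so dividing by 100 yields values in [0, 1] that can be compared against the ground-truth boxes.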
        x_coord = x_coord_raw / 100
        y_coord = y_coord_raw / 100
        return x_coord, y_coord
    else:
        return -1.0, -1.0


def pred_center_in_gt(predicted_boxes, ground_truth_boxes):
    """Calculate the percentage of predictions where the predicted center is in the ground truth box and return the indices where it is not.

    Args:
        predicted_boxes (np.ndarray): shape (n, 4) of top-left bottom-right boxes or predicted points
        ground_truth_boxes (np.ndarray): shape (n, 4) of top-left bottom-right boxes

    Returns:
        float: percentage of predictions where the predicted center is in the ground truth box
        list: indices of predictions where the center is not in the ground truth box
    """
    if ground_truth_boxes.size == 0:  # Check for empty numpy array just to be explicit
        return -1
    if predicted_boxes.shape[1] == 2:
        predicted_centers = predicted_boxes
    else:
        # Calculate the centers of the bounding boxes
        predicted_centers = (predicted_boxes[:, :2] + predicted_boxes[:, 2:]) / 2

    # Check if predicted centers are within ground truth boxes
    within_gt = (
        (predicted_centers[:, 0] >= ground_truth_boxes[:, 0])
        & (predicted_centers[:, 0] <= ground_truth_boxes[:, 2])
        & (predicted_centers[:, 1] >= ground_truth_boxes[:, 1])
        & (predicted_centers[:, 1] <= ground_truth_boxes[:, 3])
    )

    return within_gt


def to_mean_percent(metrics: list | np.ndarray) -> float:
    """Calculate mean of array and multiply by 100."""
    return np.mean(metrics) * 100


def calculate_alignment_numpy(array1, array2):
    """Returns boolean array where values are equal"""

    if array1.size == 0:  # Check for empty numpy array just to be explicit
        return [], [], []

    # Overall Accuracy
    overall_hits = array1 == array2

    # True Ground Truth Accuracy
    true_ground_truth_indices = array2 == True  # Boolean mask for True ground truth
    true_ground_truth_predictions = array1[true_ground_truth_indices]
    true_ground_truth_actuals = array2[true_ground_truth_indices]

    true_gt_hits = true_ground_truth_predictions == true_ground_truth_actuals

    # False Ground Truth Accuracy
    false_ground_truth_indices = array2 == False  # Boolean mask for False ground truth
    false_ground_truth_predictions = array1[false_ground_truth_indices]
    false_ground_truth_actuals = array2[false_ground_truth_indices]

    false_gt_hits = false_ground_truth_predictions == false_ground_truth_actuals
    return overall_hits, true_gt_hits, false_gt_hits


def clip_non_minus_one(arr):
    """Clips values in a NumPy array to [0, 1] but leaves -1 values unchanged."""
    # Create a boolean mask for values NOT equal to -1
    mask = arr != -1

    # Create a copy of the array to avoid modifying the original in-place
    clipped_arr = np.copy(arr)

    # Apply clipping ONLY to the elements where the mask is True
    clipped_arr[mask] = np.clip(clipped_arr[mask], 0, 1)

    return clipped_arr


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Parse model outputs and compute AutomotiveUI-Bench-4K metrics.")
    parser.add_argument(
        "-m", "--model_output_path", type=str, help="Path to the JSONL file that contains model outputs from inference.", required=True
    )

    args = parser.parse_args()

    EVAL_PATH = args.model_output_path
    eval_jsonl_data = read_jsonl_file(EVAL_PATH)

    ta_pred_bboxes, ta_gt_bboxes = [], []
    er_pred_bboxes, er_gt_bboxes = [], []
    er_pred_conclusion, er_gt_conclusion = [], []
    ta_out_images, er_out_images = [], []
    failed_pred_responses = []

    er_en_ids = []
    ta_en_ids = []
    ta_de_ids = []
    er_de_ids = []

    for line in eval_jsonl_data:
        # Read data from line
        image_width, image_height = line["image_size"]
        gt_box = line["gt_box"]
        lang = line["language"]
        response_raw = line["output"]

        if "Test Action" == line["gt_class"]:
            # Parse point/box from response and clip to image
            parsed_response = postprocess_response_elam(response_raw)
            if parsed_response[0] == -1:
                failed_pred_responses.append({"sample_id": line["sample_id"], "response": response_raw})

            parsed_response = np.array(parsed_response)
            parsed_response = clip_non_minus_one(parsed_response).tolist()

            # Append results
            ta_gt_bboxes.append(gt_box)
            ta_pred_bboxes.append(parsed_response)
            if lang == "DE":
                ta_de_ids.append(len(ta_pred_bboxes) - 1)  # append id
            elif lang == "EN":
                ta_en_ids.append(len(ta_pred_bboxes) - 1)

        elif "Expected Result" in line["gt_class"]:
            er_gt_bboxes.append(gt_box)

            # Parse point/box from response and clip to image
            parsed_response = postprocess_response_elam(response_raw)
            if parsed_response[0] == -1:
                failed_pred_responses.append({"sample_id": line["sample_id"], "response": response_raw})
            parsed_response = np.array(parsed_response)
            parsed_response = clip_non_minus_one(parsed_response).tolist()
            er_pred_bboxes.append(parsed_response)

            # Read evaluation conclusion
            gt_conclusion = line["gt_status"].upper()
            gt_conclusion = True if gt_conclusion == "PASSED" else False

            pred_conclusion = None
            if "FAILED" in response_raw or "is not met" in response_raw:
                pred_conclusion = False
            elif "PASSED" in response_raw or "is met" in response_raw:
                pred_conclusion = True
            if pred_conclusion is None:
                # Make prediction wrong if it couldn't be parsed
                pred_conclusion = not gt_conclusion

            er_gt_conclusion.append(gt_conclusion)
            er_pred_conclusion.append(pred_conclusion)

            if lang == "DE":
                er_de_ids.append(len(er_pred_bboxes) - 1)
            elif lang == "EN":
                er_en_ids.append(len(er_pred_bboxes) - 1)

    ta_pred_bboxes = np.array(ta_pred_bboxes)
    ta_gt_bboxes = np.array(ta_gt_bboxes)
    er_pred_bboxes = np.array(er_pred_bboxes)
    er_gt_bboxes = np.array(er_gt_bboxes)
    er_pred_conclusion = np.array(er_pred_conclusion)
    er_gt_conclusion = np.array(er_gt_conclusion)
    print(f"{'Test action (pred/gt):':<{36}}{ta_pred_bboxes.shape}, {ta_gt_bboxes.shape}")
    print(f"{'Expected results (pred/gt):':<{36}}{er_pred_bboxes.shape}, {er_gt_bboxes.shape}")

    # Calculate metrics
    ta_pred_hits = pred_center_in_gt(ta_pred_bboxes, ta_gt_bboxes)
    score_ta = to_mean_percent(ta_pred_hits)

    er_pred_hits = pred_center_in_gt(er_pred_bboxes, er_gt_bboxes)
    score_er = to_mean_percent(er_pred_hits)

    overall_hits, true_gt_hits, false_gt_hits = calculate_alignment_numpy(er_pred_conclusion, er_gt_conclusion)
    score_conclusion = to_mean_percent(overall_hits)
    score_conclusion_gt_true = to_mean_percent(true_gt_hits)
    score_conclusion_gt_false = to_mean_percent(false_gt_hits)

    # Calculate language-specific metrics for TA
    score_ta_en = to_mean_percent(ta_pred_hits[ta_en_ids])
    score_ta_de = to_mean_percent(ta_pred_hits[ta_de_ids])

    # Calculate language-specific metrics for ER (bbox)
    score_er_en = to_mean_percent(er_pred_hits[er_en_ids])
    score_er_de = to_mean_percent(er_pred_hits[er_de_ids])

    # Calculate language-specific metrics for ER (conclusion)
    score_conclusion_en = to_mean_percent(overall_hits[er_en_ids])
    score_conclusion_de = to_mean_percent(overall_hits[er_de_ids])

    print(f"\n{'Test action visual grounding:':<{36}}{score_ta:.1f}")
    print(f"{'Expected result visual grounding:':<{36}}{score_er:.1f}")
    print(f"{'Expected result evaluation:':<{36}}{score_conclusion:.1f}\n")

    eval_out_path = Path(EVAL_PATH).parent / "eval_results.json"

    write_json_file(
        {
            "score_ta": score_ta,
            "score_ta_de": score_ta_de,
            "score_ta_en": score_ta_en,
            "score_er": score_er,
            "score_er_de": score_er_de,
            "score_er_en": score_er_en,
            "score_er_conclusion": score_conclusion,
            "score_er_conclusion_de": score_conclusion_de,
            "score_er_conclusion_en": score_conclusion_en,
            "score_conclusion_gt_true": score_conclusion_gt_true,
            "score_conclusion_gt_false": score_conclusion_gt_false,
        },
        path=eval_out_path,
    )
    print(f"Stored results at {eval_out_path}")

    if failed_pred_responses:
        failed_responses_out_path = Path(EVAL_PATH).parent / "failed_responses.json"
        write_json_file(failed_pred_responses, failed_responses_out_path)
        print(f"Stored non-parsable responses at {failed_responses_out_path}")

Results

| Model | Test Action Grounding | Expected Result Grounding | Expected Result Evaluation |
|---|---|---|---|
| InternVL2.5-8B | 26.6 | 5.7 | 64.8 |
| TinyClick | 61.0 | 54.6 | - |
| UGround-V1-7B (Qwen2-VL) | 69.4 | 55.0 | - |
| Molmo-7B-D-0924 | 71.3 | 71.4 | 66.9 |
| LAM-270M (TinyClick) | 73.9 | 59.9 | - |
| ELAM-7B (Molmo) | 87.6 | 77.5 | 78.2 |

Citation

If you find ELAM or AutomotiveUI-Bench-4K useful in your research, please cite the following paper:

@misc{ernhofer2025leveragingvisionlanguagemodelsvisual,
      title={Leveraging Vision-Language Models for Visual Grounding and Analysis of Automotive UI}, 
      author={Benjamin Raphael Ernhofer and Daniil Prokhorov and Jannica Langner and Dominik Bollmann},
      year={2025},
      eprint={2505.05895},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2505.05895}, 
}

Acknowledgements

Funding

This work was supported by the German Federal Ministry of Education and Research (BMBF) within the scope of the project "KI4BoardNet".