Dataset Viewer

Auto-converted to Parquet. Dataset columns:

  • image: image (width 800–5.47k px)
  • box: 2D array (normalized bounding box coordinates)
  • class: string, 2 values ("Test Action" / "Expected Result")
  • test_action: string (4–116 characters)
  • expectation: string (3–231 characters)
  • conclusion: string, 2 values ("passed" / "failed")
  • language: string, 2 values ("DE" / "EN")
  • brand: string, 15 values

AutomotiveUI-Bench-4K

Dataset Overview: 998 images and 4,208 annotations focusing on interaction with in-vehicle infotainment (IVI) systems.

Key Features:

  • Serves as a validation benchmark for automotive UI.
  • Scope: Covers 15 automotive brands/OEMs, model years 2018-2025.
  • Image Source: Primarily photographs of IVI displays (due to screenshot limitations in most vehicles), with some direct screenshots (e.g., Android Auto).
  • Annotation Classes:
    • Test Action: Bounding box + imperative command in natural language.
    • Expected Result: Bounding box + expected outcome in natural language + Pass/Fail status.
  • Languages:
    • IVI UI: German and English.
    • Annotations: English only (German UI text translated or quoted).
  • 15 Brands/OEMs:
    • VW: 170
    • Kia: 124
    • Audi: 91
    • Cupra: 85
    • Porsche: 78
    • Ford: 72
    • Maserati: 72
    • Mini: 60
    • BMW: 59
    • Peugeot: 52
    • Tesla: 51
    • Toyota: 34
    • Opel: 30
    • Apple CarPlay: 13
    • Google Android Auto: 7
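
Each `box` entry holds normalized `[x_min, y_min, x_max, y_max]` corner coordinates in `[0, 1]` (this layout follows the evaluation code below, which treats boxes as top-left/bottom-right pairs). A minimal sketch of mapping one box to pixel space; the image size used here is hypothetical:

```python
def box_to_pixels(box, image_width, image_height):
    """Convert a normalized [x_min, y_min, x_max, y_max] box to pixel coordinates."""
    x_min, y_min, x_max, y_max = box
    return (
        round(x_min * image_width),
        round(y_min * image_height),
        round(x_max * image_width),
        round(y_max * image_height),
    )

# Example with a hypothetical 800x480 display photo:
print(box_to_pixels([0.25, 0.5, 0.75, 0.6], 800, 480))  # (200, 240, 600, 288)
```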

Usage

The corresponding model, ELAM, is also available on Hugging Face.

Setup Environment for ELAM-7B
conda create -n elam python=3.10 -y
conda activate elam
pip install datasets==3.5.0 einops==0.8.1 torchvision==0.20.1 accelerate==1.6.0
pip install transformers==4.48.2
Dataloading and Inference with ELAM-7B
# Run inference on AutomotiveUI-4k dataset on local GPU
# Outputs will be written in a JSONL file
import json
import os
import time

import torch
from datasets import Dataset, load_dataset
from tqdm import tqdm
from transformers import AutoModelForCausalLM, AutoProcessor, GenerationConfig


def preprocess_prompt_elam(user_request: str, label_class: str) -> str:
    """Apply ELAM prompt template depending on class."""
    if label_class == "Expected Result":
        return f"Evaluate this statement about the image:\n'{user_request}'\nThink step by step, conclude whether the evaluation is 'PASSED' or 'FAILED' and point to the UI element that corresponds to this evaluation."
    elif label_class == "Test Action":
        return f"Identify and point to the UI element that corresponds to this test action:\n{user_request}"
    else:
        raise ValueError(f"Unknown label class: {label_class}")


def append_to_jsonl_file(data: dict, target_path: str) -> None:
    assert str(target_path).endswith(".jsonl")
    with open(target_path, "a", encoding="utf-8") as file:
        file.write(f"{json.dumps(data, ensure_ascii=False)}\n")


def run_inference(dataset: Dataset, model: AutoModelForCausalLM, processor: AutoProcessor):
    # Define output dir and file
    timestamp = time.strftime("%Y%m%d-%H%M%S")
    DEBUG_DIR = os.path.join("eval_output", timestamp)
    model_outputs_path = os.path.join(DEBUG_DIR, "model_outputs.jsonl")

    print(f"Writing data to: {model_outputs_path}")
    for sample_id, sample in enumerate(tqdm(dataset, desc="Processing")):
        image = sample["image"]

        gt_box = sample["box"][0]
        label_class = sample["class"]

        # read gt box
        utterance = None
        gt_status = None
        if "Expected Result" == label_class:
            utterance = sample["expectation"]
            gt_status = sample["conclusion"].upper()

        elif "Test Action" == label_class:
            utterance = sample["test_action"]
        else:
            raise ValueError(f"Did not find valid utterance for image #{sample_id}.")
        assert utterance

        # Apply prompt template
        rephrased_utterance = preprocess_prompt_elam(utterance, label_class)

        # Process the image and text
        inputs = processor.process(
            images=[image],
            text=rephrased_utterance,
        )

        # Move inputs to the correct device and make a batch of size 1, cast to bfloat16
        inputs_bfloat16 = {}
        for k, v in inputs.items():
            if v.dtype == torch.float32:
                inputs_bfloat16[k] = v.to(model.device).to(torch.bfloat16).unsqueeze(0)
            else:
                inputs_bfloat16[k] = v.to(model.device).unsqueeze(0)

        inputs = inputs_bfloat16  # Replace original inputs with the correctly typed inputs

        # Generate output
        output = model.generate_from_batch(
            inputs, GenerationConfig(max_new_tokens=2048, stop_strings="<|endoftext|>"), tokenizer=processor.tokenizer
        )

        # Only get generated tokens; decode them to text
        generated_tokens = output[0, inputs["input_ids"].size(1) :]
        response = processor.tokenizer.decode(generated_tokens, skip_special_tokens=True)

        # Ensure the output dir exists before writing
        os.makedirs(DEBUG_DIR, exist_ok=True)

        # append line to jsonl
        model_output_line = {
            "sample_id": sample_id,
            "input": rephrased_utterance,
            "output": response,
            "image_size": image.size,
            "gt_class": label_class,
            "gt_box": gt_box,
            "gt_status": gt_status,
            "language": sample["language"],
        }
        append_to_jsonl_file(model_output_line, target_path=model_outputs_path)


if __name__ == "__main__":
    # Set dataset
    dataset = load_dataset("sparks-solutions/AutomotiveUI-Bench-4K")["test"]

    # Load the processor
    model_name = "sparks-solutions/ELAM-7B"
    processor = AutoProcessor.from_pretrained(
        model_name, trust_remote_code=True, torch_dtype="bfloat16", device_map="auto"
    )

    # Load the model
    model = AutoModelForCausalLM.from_pretrained(
        model_name, trust_remote_code=True, torch_dtype="bfloat16", device_map="auto"
    )
    run_inference(dataset=dataset, processor=processor, model=model)
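
The raw `output` strings written to the JSONL are free-form text that embed a Molmo-style point tag and, for Expected Result samples, a PASSED/FAILED verdict. A minimal sketch of pulling both out of one response; the example response string is made up for illustration:

```python
import re

# Hypothetical ELAM response for an Expected Result sample.
response = (
    "The checkbox is shown as unchecked, so the evaluation is PASSED. "
    '<point x="57.6" y="51.2" alt="checkbox">checkbox</point>'
)

# Molmo-style point coordinates are percentages; divide by 100 to normalize.
match = re.search(r'<point x="(?P<x>\d+\.\d+)" y="(?P<y>\d+\.\d+)"', response)
point = [float(match.group("x")) / 100, float(match.group("y")) / 100]

# Check FAILED first so a response can't be read as both verdicts.
verdict_passed = "FAILED" not in response and "PASSED" in response
print(point, verdict_passed)
```

The full parsing script below does the same extraction with extra handling for unparsable responses.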
Parsing results and calculating metrics
import argparse
import json
import re
from pathlib import Path
from typing import Tuple

import numpy as np


def read_jsonl_file(path: str) -> list:
    assert str(path).endswith(".jsonl")
    data_list = []
    with open(path, "r", encoding="utf-8") as file:
        for line in file:
            data = json.loads(line)
            data_list.append(data)
    return data_list


def write_json_file(data: dict | list, path: str) -> None:
    assert str(path).endswith(".json")
    with open(path, "w", encoding="utf-8") as outfile:
        json.dump(data, outfile, ensure_ascii=False, indent=4)


def postprocess_response_elam(response: str) -> Tuple[float, float]:
    """Parse Molmo-style point coordinates from string."""
    pattern = r'<point x="(?P<x>\d+\.\d+)" y="(?P<y>\d+\.\d+)"'
    match = re.search(pattern, response)
    if match:
        x_coord_raw = float(match.group("x"))
        y_coord_raw = float(match.group("y"))
        x_coord = x_coord_raw / 100
        y_coord = y_coord_raw / 100
        return (x_coord, y_coord)
    else:
        return (-1, -1)


def pred_center_in_gt(predicted_boxes, ground_truth_boxes):
    """Check, per sample, whether the predicted center lies inside the ground truth box.

    Args:
        predicted_boxes (np.ndarray): shape (n, 4) top-left/bottom-right boxes, or shape (n, 2) predicted points
        ground_truth_boxes (np.ndarray): shape (n, 4) top-left/bottom-right boxes

    Returns:
        np.ndarray: boolean array, True where the predicted center lies inside the
            ground truth box; -1 if there are no ground truth boxes
    """
    if ground_truth_boxes.size == 0:  # Check for empty numpy array just to be explicit
        return -1
    if predicted_boxes.shape[1] == 2:
        predicted_centers = predicted_boxes
    else:
        # Calculate the centers of the bounding boxes
        predicted_centers = (predicted_boxes[:, :2] + predicted_boxes[:, 2:]) / 2

    # Check if predicted centers are within ground truth boxes
    within_gt = (
        (predicted_centers[:, 0] >= ground_truth_boxes[:, 0])
        & (predicted_centers[:, 0] <= ground_truth_boxes[:, 2])
        & (predicted_centers[:, 1] >= ground_truth_boxes[:, 1])
        & (predicted_centers[:, 1] <= ground_truth_boxes[:, 3])
    )

    return within_gt


def to_mean_percent(metrics: list | np.ndarray) -> float:
    """Calculate mean of array and multiply by 100."""
    return np.mean(metrics) * 100


def calculate_alignment_numpy(array1, array2):
    """Returns boolean array where values are equal"""

    if array1.size == 0:  # Check for empty numpy array just to be explicit
        return [], [], []

    # Overall Accuracy
    overall_hits = array1 == array2

    # True Ground Truth Accuracy
    true_ground_truth_indices = array2 == True  # Boolean mask for True ground truth
    true_ground_truth_predictions = array1[true_ground_truth_indices]
    true_ground_truth_actuals = array2[true_ground_truth_indices]

    true_gt_hits = true_ground_truth_predictions == true_ground_truth_actuals

    # False Ground Truth Accuracy
    false_ground_truth_indices = array2 == False  # Boolean mask for False ground truth
    false_ground_truth_predictions = array1[false_ground_truth_indices]
    false_ground_truth_actuals = array2[false_ground_truth_indices]

    false_gt_hits = false_ground_truth_predictions == false_ground_truth_actuals
    return overall_hits, true_gt_hits, false_gt_hits


def clip_non_minus_one(arr):
    """Clips values in a NumPy array to [0, 1] but leaves -1 values unchanged."""
    # Create a boolean mask for values NOT equal to -1
    mask = arr != -1

    # Create a copy of the array to avoid modifying the original in-place
    clipped_arr = np.copy(arr)

    # Apply clipping ONLY to the elements where the mask is True
    clipped_arr[mask] = np.clip(clipped_arr[mask], 0, 1)

    return clipped_arr


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Run model inference and save outputs.")
    parser.add_argument(
        "-m", "--model_output_path", type=str, help="Path to json that contains model outputs from eval.", required=True
    )

    args = parser.parse_args()

    EVAL_PATH = args.model_output_path
    eval_jsonl_data = read_jsonl_file(EVAL_PATH)

    ta_pred_bboxes, ta_gt_bboxes = [], []
    er_pred_bboxes, er_gt_bboxes = [], []
    er_pred_conclusion, er_gt_conclusion = [], []
    ta_out_images, er_out_images = [], []
    failed_pred_responses = []

    er_en_ids = []
    ta_en_ids = []
    ta_de_ids = []
    er_de_ids = []

    for line in eval_jsonl_data:
        # Read data from line
        image_width, image_height = line["image_size"]
        gt_box = line["gt_box"]
        lang = line["language"]
        response_raw = line["output"]

        if "Test Action" == line["gt_class"]:
            # Parse point/box from response and clip to image
            parsed_response = postprocess_response_elam(response_raw)
            if parsed_response[0] == -1:
                failed_pred_responses.append({"sample_id": line["sample_id"], "response": response_raw})

            parsed_response = np.array(parsed_response)
            parsed_response = clip_non_minus_one(parsed_response).tolist()

            # Append results
            ta_gt_bboxes.append(gt_box)
            ta_pred_bboxes.append(parsed_response)
            if lang == "DE":
                ta_de_ids.append(len(ta_pred_bboxes) - 1)  # append id
            elif lang == "EN":
                ta_en_ids.append(len(ta_pred_bboxes) - 1)

        elif "Expected Result" in line["gt_class"]:
            er_gt_bboxes.append(gt_box)

            # Parse point/box from response and clip to image
            parsed_response = postprocess_response_elam(response_raw)
            if parsed_response[0] == -1:
                failed_pred_responses.append({"sample_id": line["sample_id"], "response": response_raw})
            parsed_response = np.array(parsed_response)
            parsed_response = clip_non_minus_one(parsed_response).tolist()
            er_pred_bboxes.append(parsed_response)

            # Read evaluation conclusion
            gt_conclusion = line["gt_status"].upper()
            gt_conclusion = True if gt_conclusion == "PASSED" else False

            pred_conclusion = None
            if "FAILED" in response_raw or "is not met" in response_raw:
                pred_conclusion = False
            elif "PASSED" in response_raw or "is met" in response_raw:
                pred_conclusion = True
            if pred_conclusion is None:
                # Make prediction wrong if it couldn't be parsed
                pred_conclusion = not gt_conclusion

            er_gt_conclusion.append(gt_conclusion)
            er_pred_conclusion.append(pred_conclusion)

            if lang == "DE":
                er_de_ids.append(len(er_pred_bboxes) - 1)
            elif lang == "EN":
                er_en_ids.append(len(er_pred_bboxes) - 1)

    ta_pred_bboxes = np.array(ta_pred_bboxes)
    ta_gt_bboxes = np.array(ta_gt_bboxes)
    er_pred_bboxes = np.array(er_pred_bboxes)
    er_gt_bboxes = np.array(er_gt_bboxes)
    er_pred_conclusion = np.array(er_pred_conclusion)
    er_gt_conclusion = np.array(er_gt_conclusion)
    print(f"{'Test action (pred/gt):':<{36}}{ta_pred_bboxes.shape}, {ta_gt_bboxes.shape}")
    print(f"{'Expected results (pred/gt):':<{36}}{er_pred_bboxes.shape}, {er_gt_bboxes.shape}")

    # Calculate metrics
    ta_pred_hits = pred_center_in_gt(ta_pred_bboxes, ta_gt_bboxes)
    score_ta = to_mean_percent(ta_pred_hits)

    er_pred_hits = pred_center_in_gt(er_pred_bboxes, er_gt_bboxes)
    score_er = to_mean_percent(er_pred_hits)

    overall_hits, true_gt_hits, false_gt_hits = calculate_alignment_numpy(er_pred_conclusion, er_gt_conclusion)
    score_conclusion = to_mean_percent(overall_hits)
    score_conclusion_gt_true = to_mean_percent(true_gt_hits)
    score_conclusion_gt_false = to_mean_percent(false_gt_hits)

    # Calculate language-specific metrics for TA
    score_ta_en = to_mean_percent(ta_pred_hits[ta_en_ids])
    score_ta_de = to_mean_percent(ta_pred_hits[ta_de_ids])

    # Calculate language-specific metrics for ER (bbox)
    score_er_en = to_mean_percent(er_pred_hits[er_en_ids])
    score_er_de = to_mean_percent(er_pred_hits[er_de_ids])

    # Calculate language-specific metrics for ER (conclusion)
    score_conclusion_en = to_mean_percent(overall_hits[er_en_ids])
    score_conclusion_de = to_mean_percent(overall_hits[er_de_ids])

    print(f"\n{'Test action visual grounding:':<{36}}{score_ta:.1f}")
    print(f"{'Expected result visual grounding:':<{36}}{score_er:.1f}")
    print(f"{'Expected result evaluation:':<{36}}{score_conclusion:.1f}\n")

    eval_out_path = Path(EVAL_PATH).parent / "eval_results.json"

    write_json_file(
        {
            "score_ta": score_ta,
            "score_ta_de": score_ta_de,
            "score_ta_en": score_ta_en,
            "score_er": score_er,
            "score_er_de": score_er_de,
            "score_er_en": score_er_en,
            "score_er_conclusion": score_conclusion,
            "score_er_conclusion_de": score_conclusion_de,
            "score_er_conclusion_en": score_conclusion_en,
            "score_conclusion_gt_true": score_conclusion_gt_true,
            "score_conclusion_gt_false": score_conclusion_gt_false,
        },
        path=eval_out_path,
    )
    print(f"Stored results at {eval_out_path}")

    if failed_pred_responses:
        failed_responses_out_path = Path(EVAL_PATH).parent / "failed_responses.json"
        write_json_file(failed_pred_responses, failed_responses_out_path)
        print(f"Stored non-parsable responses at {failed_responses_out_path}")
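As a quick sanity check of the metric definitions above, here is a self-contained toy example (with made-up predictions) of the two scores the script reports: the center-in-box grounding score and the pass/fail agreement score:

```python
import numpy as np

# Toy predicted points (normalized x, y) and ground-truth boxes
# (x_min, y_min, x_max, y_max); all values are made up for illustration.
pred_points = np.array([[0.5, 0.5], [0.9, 0.9]])
gt_boxes = np.array([[0.4, 0.4, 0.6, 0.6], [0.0, 0.0, 0.2, 0.2]])

# A prediction counts as a hit if its point falls inside the ground truth box.
hits = (
    (pred_points[:, 0] >= gt_boxes[:, 0])
    & (pred_points[:, 0] <= gt_boxes[:, 2])
    & (pred_points[:, 1] >= gt_boxes[:, 1])
    & (pred_points[:, 1] <= gt_boxes[:, 3])
)
grounding_score = hits.mean() * 100  # first point inside, second outside -> 50.0

# Pass/fail agreement between predicted and ground-truth conclusions.
pred_conclusion = np.array([True, False, True])
gt_conclusion = np.array([True, True, True])
evaluation_score = (pred_conclusion == gt_conclusion).mean() * 100  # 2 of 3 match

print(grounding_score, evaluation_score)
```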

Results

Model                       Test Action Grounding   Expected Result Grounding   Expected Result Evaluation
InternVL2.5-8B              26.6                    5.7                         64.8
TinyClick                   61.0                    54.6                        -
UGround-V1-7B (Qwen2-VL)    69.4                    55.0                        -
Molmo-7B-D-0924             71.3                    71.4                        66.9
LAM-270M (TinyClick)        73.9                    59.9                        -
ELAM-7B (Molmo)             87.6                    77.5                        78.2

Citation

If you find ELAM or AutomotiveUI-Bench-4K useful in your research, please cite the following paper:

@misc{ernhofer2025leveragingvisionlanguagemodelsvisual,
      title={Leveraging Vision-Language Models for Visual Grounding and Analysis of Automotive UI}, 
      author={Benjamin Raphael Ernhofer and Daniil Prokhorov and Jannica Langner and Dominik Bollmann},
      year={2025},
      eprint={2505.05895},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2505.05895}, 
}

Acknowledgements

Funding

This work was supported by the German Federal Ministry of Education and Research (BMBF) within the scope of the project "KI4BoardNet".
