harpreetsahota committed
Commit 6c1ddd2 · verified · 1 Parent(s): fdca187

Update README.md

Files changed (1)
  1. README.md +88 -108
README.md CHANGED
@@ -48,7 +48,7 @@ dataset_summary: '
 
 # Note: other available arguments include ''max_samples'', etc
 
-dataset = load_from_hub("harpreetsahota/mind2web_multimodal_test_task")
 
 
 # Launch the App
@@ -60,11 +60,11 @@ dataset_summary: '
 '
 ---
 
-# Dataset Card for mind2web_multimodal_test_task
-
-<!-- Provide a quick summary of the dataset. -->
 
 
 
@@ -86,141 +86,121 @@ from fiftyone.utils.huggingface import load_from_hub
 
 # Load the dataset
 # Note: other available arguments include 'max_samples', etc
-dataset = load_from_hub("harpreetsahota/mind2web_multimodal_test_task")
 
 # Launch the App
 session = fo.launch_app(dataset)
 ```
 
 
-## Dataset Details
-
-### Dataset Description
-
-<!-- Provide a longer summary of what this dataset is. -->
-
 
-- **Curated by:** [More Information Needed]
-- **Funded by [optional]:** [More Information Needed]
-- **Shared by [optional]:** [More Information Needed]
-- **Language(s) (NLP):** en
-- **License:** [More Information Needed]
 
-### Dataset Sources [optional]
-
-<!-- Provide the basic links for the dataset. -->
-
-- **Repository:** [More Information Needed]
-- **Paper [optional]:** [More Information Needed]
-- **Demo [optional]:** [More Information Needed]
 
 ## Uses
 
-<!-- Address questions around how the dataset is intended to be used. -->
-
 ### Direct Use
-
-<!-- This section describes suitable use cases for the dataset. -->
-
-[More Information Needed]
 
 ### Out-of-Scope Use
-
-<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
-
-[More Information Needed]
 
 ## Dataset Structure
-
-<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
-
-[More Information Needed]
 
 ## Dataset Creation
-
 ### Curation Rationale
-
-<!-- Motivation for the creation of this dataset. -->
-
-[More Information Needed]
 
 ### Source Data
-
-<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
-
 #### Data Collection and Processing
-
-<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
-
-[More Information Needed]
 
 #### Who are the source data producers?
 
-<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
-
-[More Information Needed]
-
-### Annotations [optional]
-
-<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
-
 #### Annotation process
-
-<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
-
-[More Information Needed]
 
 #### Who are the annotators?
 
-<!-- This section describes the people or systems who created the annotations. -->
-
-[More Information Needed]
-
-#### Personal and Sensitive Information
-
-<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
-
-[More Information Needed]
 
 ## Bias, Risks, and Limitations
-
-<!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
-[More Information Needed]
-
-### Recommendations
-
-<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
-Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
-
-## Citation [optional]
-
-<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
-
-**BibTeX:**
-
-[More Information Needed]
-
-**APA:**
-
-[More Information Needed]
-
-## Glossary [optional]
-
-<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
-
-[More Information Needed]
-
-## More Information [optional]
-
-[More Information Needed]
-
-## Dataset Card Authors [optional]
-
-[More Information Needed]
 
 ## Dataset Card Contact
-
-[More Information Needed]
 
 
 # Note: other available arguments include ''max_samples'', etc
 
+dataset = load_from_hub("Voxel51/mind2web_multimodal_test_task")
 
 
 # Launch the App
 
 '
 ---
 
+# Dataset Card for Multimodal Mind2Web "Cross-Task" Test Split
 
+**Note**: This dataset is the Cross-Task test split of the Multimodal Mind2Web benchmark introduced in the paper "GPT-4V(ision) is a Generalist Web Agent, if Grounded".
 
+![image/png](m2w_tt.gif)
 
 
 
 # Load the dataset
 # Note: other available arguments include 'max_samples', etc
+dataset = load_from_hub("Voxel51/mind2web_multimodal_test_task")
 
 # Launch the App
 session = fo.launch_app(dataset)
 ```
 
 
+## Dataset Description
 
+- **Curated by:** The Ohio State University NLP Group (OSU-NLP-Group)
+- **Shared by:** OSU-NLP-Group on Hugging Face
+- **Language(s) (NLP):** en
+- **License:** OPEN-RAIL
 
+## Dataset Source
 
+- **Repository:** https://github.com/OSU-NLP-Group/SeeAct and https://huggingface.co/datasets/osunlp/Multimodal-Mind2Web
+- **Paper:** "GPT-4V(ision) is a Generalist Web Agent, if Grounded" by Boyuan Zheng, Boyu Gou, Jihyung Kil, Huan Sun, and Yu Su
+- **Demo:** https://osu-nlp-group.github.io/SeeAct
 
 
 ## Uses
 
 ### Direct Use
+- Evaluating web agents' ability to generalize to new tasks on familiar websites
+- Benchmarking LMMs and LLMs on web navigation tasks
+- Training and fine-tuning models for web navigation
+- Testing model performance on tasks that require following multi-step instructions
 
 ### Out-of-Scope Use
+- Developing web agents for harmful purposes (as stated in the paper's impact statement)
+- Automating actions that could violate website terms of service
+- Creating agents that access users' personal profiles or perform sensitive operations without consent
 
 ## Dataset Structure
+- Contains 177 tasks across 17 domains and 64 websites
+- Tasks average 7.6 actions each
+- Average of 4,172 visual tokens per task
+- Average of 607 HTML elements per task
+- Average of 123,274 HTML tokens per task
+- Each example includes a task description, HTML structure, operations (CLICK, TYPE, SELECT), target elements with attributes, and action histories
+
+### FiftyOne Dataset Structure
+
+**Basic Info:** 1,338 web UI screenshots with task-based annotations
+
+**Core Fields:**
+- `action_uid`: StringField - Unique action identifier
+- `annotation_id`: StringField - Annotation identifier
+- `target_action_index`: IntField - Index of the target action in the sequence
+- `ground_truth`: EmbeddedDocumentField(Detection) - Element to interact with:
+  - `label`: Action type (TYPE, CLICK)
+  - `bounding_box`: relative bounding box coordinates in [0, 1], in the format `[<top-left-x>, <top-left-y>, <width>, <height>]`
+- `target_action_reprs`: String representation of the target action
+- `website`: EmbeddedDocumentField(Classification) - Website name
+- `domain`: EmbeddedDocumentField(Classification) - Website domain category
+- `subdomain`: EmbeddedDocumentField(Classification) - Website subdomain category
+- `task_description`: StringField - Natural language description of the task
+- `full_sequence`: ListField(StringField) - Complete sequence of actions for the task
+- `previous_actions`: ListField - Actions already performed in the sequence
+- `current_action`: StringField - Action to be performed
+- `alternative_candidates`: EmbeddedDocumentField(Detections) - Other possible elements
 
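Since `bounding_box` stores relative `[<top-left-x>, <top-left-y>, <width>, <height>]` values in [0, 1], mapping a box onto a screenshot only requires the image dimensions. A minimal pure-Python sketch (the helper name, rounding choice, and example dimensions are illustrative, not part of the dataset):

```python
def to_pixels(rel_box, img_width, img_height):
    """Convert a relative [x, y, w, h] box with values in [0, 1] to pixel coordinates."""
    x, y, w, h = rel_box
    return (
        round(x * img_width),   # left edge in pixels
        round(y * img_height),  # top edge in pixels
        round(w * img_width),   # box width in pixels
        round(h * img_height),  # box height in pixels
    )

# A box covering the left half of a hypothetical 1280x720 screenshot
print(to_pixels([0.0, 0.0, 0.5, 1.0], 1280, 720))  # (0, 0, 640, 720)
```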
 ## Dataset Creation
 ### Curation Rationale
+The Cross-Task split was designed to evaluate an agent's ability to generalize to new tasks on websites and domains it has already encountered during training.
 
 ### Source Data
 #### Data Collection and Processing
+- Based on the original Mind2Web dataset
+- Each HTML document is aligned with its corresponding webpage screenshot
+- Underwent human verification to confirm element visibility and correct rendering for action prediction
 
 #### Who are the source data producers?
+Web screenshots and HTML were collected from 64 websites across 17 domains that are also represented in the training data.
 
+### Annotations
 
 #### Annotation process
+Each task includes annotated action sequences showing the correct steps to complete the task. These were likely captured with a tool that records user actions on websites.
 
 #### Who are the annotators?
+Researchers from The Ohio State University NLP Group or hired annotators; specific details aren't provided in the paper.
 
+### Personal and Sensitive Information
+The dataset focuses on non-login tasks to comply with website user agreements and avoid privacy issues.
 
 ## Bias, Risks, and Limitations
+- Performance on this split is generally better than on Cross-Website and Cross-Domain, since models can leverage knowledge of familiar website structures
+- Supervised fine-tuning methods show an advantage on this split compared to in-context learning
+- The dataset may contain biases present in the original websites
+- Website layouts and functionality may change over time, affecting the validity of the dataset
+
+## Citation
+
+### BibTeX:
+
+```bibtex
+@inproceedings{zheng2024seeact,
+  title={GPT-4V(ision) is a Generalist Web Agent, if Grounded},
+  author={Boyuan Zheng and Boyu Gou and Jihyung Kil and Huan Sun and Yu Su},
+  booktitle={Forty-first International Conference on Machine Learning},
+  year={2024},
+  url={https://openreview.net/forum?id=piecKJ2DlB},
+}
+
+@inproceedings{deng2023mindweb,
+  title={Mind2Web: Towards a Generalist Agent for the Web},
+  author={Xiang Deng and Yu Gu and Boyuan Zheng and Shijie Chen and Samuel Stevens and Boshi Wang and Huan Sun and Yu Su},
+  booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
+  year={2023},
+  url={https://openreview.net/forum?id=kiYqbO3wqw}
+}
+```
+
+### APA:
+Zheng, B., Gou, B., Kil, J., Sun, H., & Su, Y. (2024). GPT-4V(ision) is a Generalist Web Agent, if Grounded. arXiv preprint arXiv:2401.01614.
 
 
 ## Dataset Card Contact
+ GitHub: https://github.com/OSU-NLP-Group/SeeAct
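Because every screenshot carries an `annotation_id` and a `target_action_index`, full task trajectories can be reconstructed by grouping steps per task and sorting them. A minimal pure-Python sketch over dict-shaped records (the sample values are made up; the field names follow the schema in the card above):

```python
from collections import defaultdict

def group_trajectories(samples):
    """Group per-step samples into ordered action sequences, keyed by annotation_id."""
    tasks = defaultdict(list)
    for sample in samples:
        tasks[sample["annotation_id"]].append(sample)
    # Order each task's steps by their position in the action sequence
    return {
        ann_id: sorted(steps, key=lambda s: s["target_action_index"])
        for ann_id, steps in tasks.items()
    }

# Hypothetical records mimicking the card's field layout
samples = [
    {"annotation_id": "task_1", "target_action_index": 1, "current_action": "TYPE 'flights'"},
    {"annotation_id": "task_1", "target_action_index": 0, "current_action": "CLICK search box"},
]
trajectories = group_trajectories(samples)
print([s["current_action"] for s in trajectories["task_1"]])
# ['CLICK search box', "TYPE 'flights'"]
```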