# HowToStep
HowToStep is an automatically generated, large-scale, high-quality dataset built by transforming ASR transcripts into descriptive steps with a large language model (LLM) and then aligning these steps to the video through a two-stage determination procedure.

[[project page]](https://lzq5.github.io/Video-Text-Alignment/)
[[Arxiv]](https://arxiv.org/abs/2312.14055)
[[GitHub]](https://github.com/Lzq5/Video-Text-Alignment)

## Analysis

![](src/analysis.png)

HowToStep transforms the original transcripts of [*HTM-370K*](https://www.robots.ox.ac.uk/~vgg/research/tan/htm_sentencify_stats.html) into around 4M ordered instructional steps with start/end timestamps for almost 340K videos after filtering.
As shown in the figure, the average number of steps (sentences) per video is 10.6 and the average number of words per step is 8.0.
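These averages can be re-derived from the released annotations. Below is a minimal sketch (not part of the release) that recomputes both numbers, assuming the per-video `.pth` files described in the Download section have been extracted into a local `howtostep/` directory (the directory name is illustrative):

```python
# Minimal sketch: recompute the average steps per video and words per step.
# The "howtostep" directory name is an assumption; point it at wherever the
# released .pth files were extracted.
from pathlib import Path

import torch

step_counts, word_counts = [], []
for path in Path("howtostep").glob("**/*.pth"):
    ann = torch.load(path)  # dict with 'vid', 'start', 'end', 'text'
    step_counts.append(len(ann["text"]))
    word_counts.extend(len(step.split()) for step in ann["text"])

print("avg steps per video:", sum(step_counts) / len(step_counts))
print("avg words per step:", sum(word_counts) / len(word_counts))
```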
## Download

We provide the dataset as a tar.gz archive. After decompression, each folder contains one `.pth` file per video, named by its video ID (`vid.pth`).
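A minimal loading sketch (PyTorch is assumed for reading the `.pth` files; the archive name `howtostep.tar.gz`, the extraction directory, and the example video ID from the Data Instances section below are illustrative):

```python
# Minimal sketch: extract the archive and load one per-video annotation.
# The archive name and extraction directory are assumptions, not fixed names.
import tarfile
from pathlib import Path

import torch

with tarfile.open("howtostep.tar.gz", "r:gz") as tar:
    tar.extractall("howtostep")

# Each extracted folder holds one <vid>.pth file per video.
path = next(Path("howtostep").glob("**/_sAn5Pp9GxQ.pth"))
ann = torch.load(path)  # dict with 'vid', 'start', 'end', 'text'
print(ann["vid"], len(ann["text"]), "steps")
```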
### Data Instances
```
{'vid': '_sAn5Pp9GxQ',
 'start': [33, 36, 42, ..., 398],
 'end': [41, 44, 50, ..., 406],
 'text': ['Add pasta to boiling water.', 'Keep boiling until pasta is al dente.', 'Quinoa pasta, corn pasta, or brown rice pasta.', ..., "Check out the creator's quick prep meal plan program for more recipe ideas."]}
```
### Data Fields
* `vid` (str): ID of the video.
* `start`/`end` (List of int): start/end times of the steps in the video.
* `text` (List of str): descriptive steps generated by the large language model (see the sketch below).
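A minimal sketch of how the fields line up per step, reusing the file loaded in the Download section (the extraction directory `howtostep/` is again an assumption):

```python
# Minimal sketch: pair each descriptive step with its start/end timestamps.
from pathlib import Path

import torch

ann = torch.load(next(Path("howtostep").glob("**/_sAn5Pp9GxQ.pth")))

# Step i spans [start[i], end[i]] in the video and is described by text[i].
for start, end, step in zip(ann["start"], ann["end"], ann["text"]):
    print(f"[{start} - {end}] {step}")
```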
## Citation
If you find HowToStep helpful in your work, please consider citing it. Feel free to contact [email protected] or open an issue if you have any questions.
```bibtex
@article{li2023strong,
  title={A Strong Baseline for Temporal Video-Text Alignment},
  author={Li, Zeqian and Chen, Qirui and Han, Tengda and Zhang, Ya and Wang, Yanfeng and Xie, Weidi},
  journal={arXiv preprint arXiv:2312.14055},
  year={2023}
}
```