
HowToStep

HowToStep is an automatically generated, large-scale, high-quality dataset that transforms ASR transcripts of instructional videos into descriptive steps by prompting a large language model (LLM), then aligns those steps to the video through a two-stage determination procedure.

[project page] [arXiv] [GitHub]

Analysis

HowToStep transforms the original transcripts of HTM-370K into around 4M ordered instructional steps with start/end timestamps, covering almost 340K videos after filtering. On average, each video contains 10.6 steps (sentences), and each step contains 8.0 words.

Download

We provide a tar.gz file. After decompression, each folder contains one file per video, named `<vid>.pth`, where `vid` is the video ID.
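As a minimal sketch, extracting the archive and listing the per-video files could look like the following (the archive name `HowToStep.tar.gz` and the target directory are placeholders; substitute the file you actually downloaded):

```python
import tarfile
from pathlib import Path

# Hypothetical archive name; use the file you actually downloaded.
with tarfile.open("HowToStep.tar.gz", "r:gz") as tar:
    tar.extractall("HowToStep")

# Each extracted folder contains one .pth file per video, named <vid>.pth.
for pth_path in sorted(Path("HowToStep").rglob("*.pth")):
    print(pth_path.name)
```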

Data Instances

{'vid': '_sAn5Pp9GxQ', 
'start': [33, 36, 42, ..., 398], 
'end': [41, 44, 50, ..., 406], 
'text': ['Add pasta to boiling water.', 'Keep boiling until pasta is al dente.', 'Quinoa pasta, corn pasta, or brown rice pasta.', ..., "Check out the creator's quick prep meal plan program for more recipe ideas."]}

Data Fields

  • vid (str): ID of the video.
  • start/end (List of int): start/end times of each step in the video.
  • text (List of str): descriptive steps generated by the large language model.
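
A minimal loading sketch, assuming each `<vid>.pth` file stores the dictionary shown in Data Instances and can be read with `torch.load`:

```python
import torch

# Hypothetical filename; replace with an actual <vid>.pth file from the dataset.
data = torch.load("_sAn5Pp9GxQ.pth")

# Each file holds a dict with 'vid', 'start', 'end', and 'text' (see Data Fields).
# Iterate over the aligned steps: each step pairs a text with its timestamps.
for start, end, text in zip(data["start"], data["end"], data["text"]):
    print(f"[{start} - {end}] {text}")
```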

Citation

If you find HowToStep helpful in your work, please consider citing it. Feel free to contact [email protected] or open an issue if you have any questions.

@article{li2023strong,
  title={A Strong Baseline for Temporal Video-Text Alignment},
  author={Li, Zeqian and Chen, Qirui and Han, Tengda and Zhang, Ya and Wang, Yanfeng and Xie, Weidi},
  journal={arXiv preprint arXiv:2312.14055},
  year={2023}
}