neurlang committed on
Commit 445b3d2 · verified · 1 Parent(s): 415b3ea

Update README.md

Files changed (1): README.md (+1 −1)
README.md CHANGED
@@ -258,7 +258,7 @@ The Whisper model is intrinsically designed to work on audio samples of up to 30
  algorithm, it can be used to transcribe audio samples of up to arbitrary length. This is possible through Transformers
  [`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline)
  method. Chunking is enabled by setting `chunk_length_s=30` when instantiating the pipeline. With chunking enabled, the pipeline
- can be run with batched inference. It can also be extended to predict sequence level timestamps by passing `return_timestamps=True`:
+ can be run with batched inference. It can also be extended to predict word level timestamps by passing `return_timestamps="word"`:
 
  ```python
  >>> import torch
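The updated README line can be sketched as follows. This is a minimal illustration of chunked long-form transcription with word-level timestamps, assuming the `transformers` `pipeline` API; the checkpoint name `openai/whisper-large-v2`, the audio path `audio.mp3`, and the `batch_size` value are placeholders, not taken from the diff above:

```python
# Sketch (assumptions noted above): chunked transcription with word-level timestamps.
import torch
from transformers import pipeline

pipe = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-large-v2",   # placeholder checkpoint
    chunk_length_s=30,                 # enables chunking for arbitrary-length audio
    device="cuda:0" if torch.cuda.is_available() else "cpu",
)

# batch_size enables batched inference over the 30 s chunks;
# return_timestamps="word" requests per-word timestamps (the change in this commit).
result = pipe("audio.mp3", batch_size=8, return_timestamps="word")

print(result["text"])    # full transcription
print(result["chunks"])  # list of {"text": ..., "timestamp": (start, end)} per word
```

With `return_timestamps=True` (the pre-commit value) the pipeline instead returns segment-level timestamps, one entry per predicted sequence rather than per word.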