---
pipeline_tag: automatic-speech-recognition
---

# How to use

```python
import torch
from diarizers import SegmentationModel
from pyannote.audio import Pipeline

device = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu")

# load the fine-tuned segmentation model via diarizers
segmentation_model = SegmentationModel().from_pretrained("jaeyong2/speaker-segmentation-merged")

# convert it to a pyannote-compatible format
model = segmentation_model.to_pyannote_model()

# instantiate the diarization pipeline and swap in the fine-tuned segmentation model
pipeline = Pipeline.from_pretrained(
    "pyannote/speaker-diarization-3.1",
    use_auth_token="YOUR_HF_TOKEN",  # replace with your Hugging Face access token
)
pipeline._segmentation.model = model
pipeline.to(device)

# run the pipeline on an audio file
diarization = pipeline("output.wav")

# dump the diarization output to disk in RTTM format
with open("audio.rttm", "w") as rttm:
    diarization.write_rttm(rttm)
```
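The RTTM file written above contains one `SPEAKER` line per speech turn, following the standard RTTM field layout: `SPEAKER <file-id> <channel> <onset> <duration> <NA> <NA> <speaker> <NA> <NA>`. As a minimal sketch (the `parse_rttm` helper and the example lines are illustrative, not part of pyannote), the output can be read back into `(speaker, start, end)` tuples like this:

```python
# Minimal sketch: parse RTTM lines into (speaker, start, end) segments.
# Field layout per the RTTM convention:
# SPEAKER <file-id> <channel> <onset> <duration> <NA> <NA> <speaker> <NA> <NA>

def parse_rttm(lines):
    segments = []
    for line in lines:
        fields = line.split()
        if not fields or fields[0] != "SPEAKER":
            continue  # skip blank or non-SPEAKER lines
        onset, duration = float(fields[3]), float(fields[4])
        segments.append((fields[7], onset, onset + duration))
    return segments

example = [
    "SPEAKER audio 1 0.50 2.25 <NA> <NA> SPEAKER_00 <NA> <NA>",
    "SPEAKER audio 1 3.00 1.50 <NA> <NA> SPEAKER_01 <NA> <NA>",
]
print(parse_rttm(example))
# → [('SPEAKER_00', 0.5, 2.75), ('SPEAKER_01', 3.0, 4.5)]
```

In practice you would pass `open("audio.rttm")` instead of the hard-coded example lines.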