CLVP
Overview
The CLVP (Contrastive Language-Voice Pretrained Transformer) model was proposed in Better speech synthesis through scaling by James Betker.
The abstract from the paper is the following:
*In recent years, the field of image generation has been revolutionized by the application of autoregressive transformers and DDPMs. These approaches model the process of image generation as a step-wise probabilistic process and leverage large amounts of compute and data to learn the image distribution. This methodology of improving performance need not be confined to images. This paper describes a way to apply advances in the image generative domain to speech synthesis. The result is TorToise, an expressive, multi-voice text-to-speech system.*
This model was contributed by Susnato Dhar. The original code can be found here.
Usage tips
- CLVP is an integral part of the Tortoise TTS model.
- CLVP can be used to compare the different generated speech candidates with the provided text, and the best speech tokens are forwarded to the diffusion model.
- The use of the ClvpModelForConditionalGeneration.generate() method is strongly recommended for Tortoise usage.
- Note that the CLVP model expects the audio to be sampled at 22.05 kHz, contrary to other audio models which expect 16 kHz.
Brief Explanation:
- The ClvpTokenizer tokenizes the text input, and the ClvpFeatureExtractor extracts the log mel-spectrogram from the desired audio.
- The ClvpConditioningEncoder takes these text tokens and audio representations and converts them into embeddings conditioned on the text and audio.
- The ClvpForCausalLM uses those embeddings to generate multiple speech candidates.
- Each speech candidate is passed through the speech encoder (ClvpEncoder), which converts it into a vector representation, and the text encoder (ClvpEncoder) converts the text tokens into the same latent space.
- At the end, each speech vector is compared with the text vector to find out which speech vector is most similar to the text vector.
- ClvpModelForConditionalGeneration.generate() compresses all of the above logic into a single method.
Example:
>>> import datasets
>>> from transformers import ClvpProcessor, ClvpModelForConditionalGeneration
>>> # Define the Text and Load the Audio (We are taking an audio example from HuggingFace Hub using `datasets` library).
>>> text = "This is an example text."
>>> ds = datasets.load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> ds = ds.cast_column("audio", datasets.Audio(sampling_rate=22050))
>>> sample = ds[0]["audio"]
>>> # Define processor and model.
>>> processor = ClvpProcessor.from_pretrained("susnato/clvp_dev")
>>> model = ClvpModelForConditionalGeneration.from_pretrained("susnato/clvp_dev")
>>> # Generate processor output and model output.
>>> processor_output = processor(raw_speech=sample["array"], sampling_rate=sample["sampling_rate"], text=text, return_tensors="pt")
>>> generated_output = model.generate(**processor_output)
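Depending on the generation settings, generate() returns either a ClvpOutput or a plain tuple (see the generate documentation below). As a rough sketch, assuming the dict-like output is returned, the generated fields described later on this page can be inspected as follows:
>>> # Sketch: inspect the generated output, assuming a `ClvpOutput` is returned.
>>> # `speech_ids` holds the generated speech token candidates and `logits_per_text`
>>> # holds the text-speech similarity scores used to rank them.
>>> speech_ids = generated_output.speech_ids
>>> similarity_scores = generated_output.logits_per_text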
ClvpConfig
class transformers.ClvpConfig
< source >( text_config = None speech_config = None decoder_config = None projection_dim = 768 logit_scale_init_value = 2.6592 initializer_factor = 1.0 **kwargs )
Parameters
- text_config (
dict
, optional) — Dictionary of configuration options used to initialize the CLVP text encoder. - speech_config (
dict
, optional) — Dictionary of configuration options used to initialize CLVP speech encoder. - decoder_config (
dict
, optional) — Dictionary of configuration options used to initialize ClvpDecoderConfig. - projection_dim (
int
, optional, defaults to 768) — Dimensionality of text and speech projection layers. - logit_scale_init_value (
float
, optional, defaults to 2.6592) — The initial value of the logit_scale parameter. Default is used as per the original CLVP implementation. - initializer_factor (
float
, optional, defaults to 1.0) — A factor for initializing all weight matrices (should be kept to 1.0, used internally for initialization testing). - kwargs (optional) — Dictionary of keyword arguments.
ClvpConfig is the configuration class to store the configuration of a ClvpModelForConditionalGeneration. It is used to instantiate a CLVP model according to the specified arguments, defining the text model, speech model and decoder model configs. Instantiating a configuration with the defaults will yield a similar configuration to that of the CLVP susnato/clvp_dev architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Example:
>>> from transformers import ClvpConfig, ClvpModelForConditionalGeneration
>>> # Initializing a ClvpConfig with susnato/clvp_dev style configuration
>>> configuration = ClvpConfig()
>>> # Initializing a ClvpModelForConditionalGeneration (with random weights) from the susnato/clvp_dev style configuration
>>> model = ClvpModelForConditionalGeneration(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
>>> # We can also initialize a CLVPConfig from a CLVPTextConfig, CLVPSpeechConfig and a CLVPAutoRegressiveConfig
>>> from transformers import ClvpEncoderConfig, ClvpDecoderConfig
>>> # Initializing a CLVP text, CLVP speech and CLVP decoder configuration
>>> config_text = ClvpEncoderConfig()
>>> config_speech = ClvpEncoderConfig()
>>> decoder_config = ClvpDecoderConfig()
>>> config = ClvpConfig.from_sub_model_configs(config_text, config_speech, decoder_config)
from_sub_model_configs
< source >( text_config: ClvpEncoderConfig speech_config: ClvpEncoderConfig decoder_config: ClvpDecoderConfig **kwargs ) → ClvpConfig
Parameters
- text_config (
ClvpEncoderConfig
) — Text model configuration of type ClvpEncoderConfig. - speech_config (
ClvpEncoderConfig
) — Speech model configuration of type ClvpEncoderConfig. - decoder_config (
ClvpDecoderConfig
) — Decoder model configuration of type ClvpDecoderConfig.
Returns
An instance of a configuration object
Instantiate a ClvpConfig (or a derived class) from CLVP text model configuration, CLVP speech model configuration and CLVP decoder model configuration.
ClvpEncoderConfig
class transformers.ClvpEncoderConfig
< source >( vocab_size = 256 hidden_size = 768 intermediate_size = 1536 projection_dim = 768 num_hidden_layers = 20 num_attention_heads = 12 hidden_act = 'gelu' layer_norm_eps = 1e-05 attention_dropout = 0.1 dropout = 0.1 use_rotary_embedding = True use_attention_bias = False summary_type = 'mean' initializer_factor = 1.0 bos_token_id = 255 eos_token_id = 0 **kwargs )
Parameters
- vocab_size (
int
, optional, defaults to 256) — Vocabulary size of the CLVP Encoder model. - hidden_size (
int
, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer. - intermediate_size (
int
, optional, defaults to 1536) — Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder. - projection_dim (
int
, optional, defaults to 768) — Dimensionality of the projection vector. - num_hidden_layers (
int
, optional, defaults to 20) — Number of hidden layers in the Transformer encoder. - num_attention_heads (
int
, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder. - hidden_act (
str
or function
, optional, defaults to "gelu"
) — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu", "gelu_new" and "quick_gelu" are supported. - layer_norm_eps (
float
, optional, defaults to 1e-05) — The epsilon used by the layer normalization layers. - attention_dropout (
float
, optional, defaults to 0.1) — The dropout ratio for the attention probabilities. - dropout (
float
, optional, defaults to 0.1) — The dropout ratio for the feed-forward layers inClvpEncoderMLP
. - use_rotary_embedding (
bool
, optional, defaults toTrue
) — Whether to use rotary_embedding or not. - use_attention_bias (
bool
, optional, defaults toFalse
) — Whether to use bias in Query, Key and Value layers during self attention. - summary_type (
str
, optional, defaults to"mean"
) — What strategy to use to get pooler_output from the last_hidden_state."last"
,"first"
,"mean"
and"cls_index"
are supported. - initializer_factor (
float
, optional, defaults to 1.0) — A factor for initializing all weight matrices (should be kept to 1.0, used internally for initialization testing). - bos_token_id (
int
, optional, defaults to 255) — Beginning of sequence token id. - eos_token_id (
int
, optional, defaults to 0) — End of sequence token id.
This is the configuration class to store the configuration of a ClvpEncoder. It is used to instantiate a CLVP text or CLVP speech encoder according to the specified arguments. Instantiating a configuration with the defaults will yield a similar configuration to that of the encoder of the CLVP susnato/clvp_dev architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Example:
>>> from transformers import ClvpEncoderConfig, ClvpEncoder
>>> # Initializing a ClvpEncoderConfig with susnato/clvp_dev style configuration
>>> encoder_configuration = ClvpEncoderConfig()
>>> # Initializing a ClvpEncoder (with random weights) from the susnato/clvp_dev style configuration
>>> model = ClvpEncoder(encoder_configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
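The defaults can be overridden at construction time; a small sketch with illustrative (hypothetical) values for a smaller encoder:
>>> # Sketch: a smaller encoder configuration; the values below are illustrative only.
>>> custom_encoder_configuration = ClvpEncoderConfig(hidden_size=512, num_hidden_layers=8, num_attention_heads=8)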
ClvpDecoderConfig
class transformers.ClvpDecoderConfig
< source >( vocab_size = 8194 max_position_embeddings = 608 max_text_tokens = 404 hidden_size = 1024 num_hidden_layers = 30 num_attention_heads = 16 n_inner = None num_mel_attn_blocks = 6 activation_function = 'gelu_new' resid_pdrop = 0.1 embd_pdrop = 0.1 attention_dropout = 0.1 layer_norm_epsilon = 1e-05 initializer_range = 0.02 summary_type = 'cls_index' summary_use_proj = True summary_activation = None summary_proj_to_labels = True summary_first_dropout = 0.1 use_cache = True bos_token_id = 8192 eos_token_id = 8193 feature_size = 80 use_attention_bias = True initializer_factor = 1.0 decoder_fixing_codes = [83, 45, 45, 248] **kwargs )
Parameters
- vocab_size (
int
, optional, defaults to 8194) — Vocabulary size of the model. - max_position_embeddings (
int
, optional, defaults to 608) — The maximum sequence length of mel tokens that this model might ever be used with. Similar to n_positions
in GPT2Config
. - max_text_tokens (
int
, optional, defaults to 404) — The maximum sequence length of text tokens that this model might ever be used with. Similar to n_positions
in GPT2Config
. - hidden_size (
int
, optional, defaults to 1024) — Dimensionality of the embeddings and hidden states. - num_hidden_layers (
int
, optional, defaults to 30) — Number of hidden layers in the Transformer encoder. - num_attention_heads (
int
, optional, defaults to 16) — Number of attention heads for each attention layer in the Transformer encoder. - n_inner (
int
, optional) — Dimensionality of the inner feed-forward layers.None
will set it to 4 times hidden_size
. - num_mel_attn_blocks (
int
, optional, defaults to 6) — Denotes the number of self attention layers in ClvpConditioningEncoder
. - activation_function (
str
, optional, defaults to"gelu_new"
) — Activation function, to be selected in the list["relu", "silu", "gelu", "tanh", "gelu_new"]
. - resid_pdrop (
float
, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. - embd_pdrop (
float
, optional, defaults to 0.1) — The dropout ratio for the embeddings. - attention_dropout (
float
, optional, defaults to 0.1) — The dropout ratio for the attention. - layer_norm_epsilon (
float
, optional, defaults to 1e-05) — The epsilon to use in the layer normalization layers. - initializer_range (
float
, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. - summary_type (
string
, optional, defaults to"cls_index"
) — Argument used when doing sequence summary. Has to be one of the following options:
"last"
: Take the last token hidden state (like XLNet)."first"
: Take the first token hidden state (like BERT)."mean"
: Take the mean of all tokens hidden states."cls_index"
: Supply a Tensor of classification token position (like GPT/GPT-2)."attn"
: Not implemented now, use multi-head attention.
- summary_use_proj (
bool
, optional, defaults toTrue
) — Whether or not to add a projection after the vector extraction. - summary_activation (
str
, optional) — Pass"tanh"
for a tanh activation to the output, any other value will result in no activation. - summary_proj_to_labels (
bool
, optional, defaults toTrue
) — Whether the projection outputs should haveconfig.num_labels
orconfig.hidden_size
classes. - summary_first_dropout (
float
, optional, defaults to 0.1) — The dropout ratio to be used after the projection and activation. - use_cache (
bool
, optional, defaults toTrue
) — Whether or not the model should return the last key/values attentions (not used by all models). - bos_token_id (
int
, optional, defaults to 8192) — Beginning of sequence token id, used at the start of the generation. - eos_token_id (
int
, optional, defaults to 8193) — End of sequence token id, used in the methodClvpModelForConditionalGeneration.fix_speech_decoder_output()
to correct decoder outputs. - feature_size (
int
, optional, defaults to 80) — The feature dimension of the extracted mel features. This value is used inClvpConditioningEncoder
. - use_attention_bias (
bool
, optional, defaults toTrue
) — Whether to use bias in Query, Key and Value layers during self attention. - initializer_factor (
float
, optional, defaults to 1.0) — A factor for initializing all weight matrices (should be kept to 1.0, used internally for initialization testing). - decoder_fixing_codes (
list
, optional, defaults to[83, 45, 45, 248]
) — These values are used in the methodfix_speech_decoder_output
to fix decoder generated outputs.
This is the configuration class to store the configuration of a ClvpDecoder. It is used to instantiate a CLVP Decoder Model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Decoder part of the CLVP susnato/clvp_dev architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
The architecture is similar to GPT2.
Example:
>>> from transformers import ClvpDecoderConfig, ClvpDecoder
>>> # Initializing a ClvpDecoderConfig with susnato/clvp_dev style configuration
>>> decoder_configuration = ClvpDecoderConfig()
>>> # Initializing a ClvpDecoder (with random weights) from the susnato/clvp_dev style configuration
>>> model = ClvpDecoder(decoder_configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
ClvpTokenizer
class transformers.ClvpTokenizer
< source >( vocab_file merges_file errors = 'replace' unk_token = '[UNK]' bos_token = '<|endoftext|>' eos_token = '[STOP]' pad_token = '[STOP]' add_prefix_space = False add_bos_token = False add_eos_token = False **kwargs )
Parameters
- vocab_file (
str
) — Path to the vocabulary file. - merges_file (
str
) — Path to the merges file. - errors (
str
, optional, defaults to"replace"
) — Paradigm to follow when decoding bytes to UTF-8. See bytes.decode for more information. - unk_token (
str
, optional, defaults to"[UNK]"
) — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. - bos_token (
str
, optional, defaults to"<|endoftext|>"
) — The beginning of sequence token. - eos_token (
str
, optional, defaults to"[STOP]"
) — The end of sequence token. - pad_token (
str
, optional, defaults to"[STOP]"
) — The pad token of the sequence. - add_prefix_space (
bool
, optional, defaults toFalse
) — Whether or not to add an initial space to the input. This allows treating the leading word just as any other word. (The CLVP tokenizer detects the beginning of words by the preceding space.) - add_bos_token (
bool
, optional, defaults toFalse
) — Whether to addbos_token
in front of the sequence when add_special_tokens=True. - add_eos_token (
bool
, optional, defaults toFalse
) — Whether to addeos_token
at the end of the sequence when add_special_tokens=True.
Construct a CLVP tokenizer. Based on byte-level Byte-Pair-Encoding.
This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece) so a word will
be encoded differently whether it is at the beginning of the sentence (without space) or not:
>>> from transformers import ClvpTokenizer
>>> tokenizer = ClvpTokenizer.from_pretrained("susnato/clvp_dev")
>>> tokenizer("Hello world")["input_ids"]
[62, 84, 28, 2, 179, 79]
>>> tokenizer(" Hello world")["input_ids"]
[2, 62, 84, 28, 2, 179, 79]
You can get around that behavior by passing add_prefix_space=True
when instantiating this tokenizer or when you
call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance.
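A brief sketch of the first option, instantiating the tokenizer with add_prefix_space=True (the checkpoint name follows the examples above):
>>> from transformers import ClvpTokenizer

>>> # Sketch: pass `add_prefix_space=True` at instantiation so a leading space is
>>> # prepended to the input before tokenization.
>>> tokenizer = ClvpTokenizer.from_pretrained("susnato/clvp_dev", add_prefix_space=True)
>>> tokenizer("Hello world")["input_ids"]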
When used with is_split_into_words=True
, this tokenizer will add a space before each word (even the first one).
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
ClvpFeatureExtractor
class transformers.ClvpFeatureExtractor
< source >( feature_size = 80 sampling_rate = 22050 default_audio_length = 6 hop_length = 256 chunk_length = 30 n_fft = 1024 padding_value = 0.0 mel_norms = None return_attention_mask = False **kwargs )
Parameters
- feature_size (
int
, optional, defaults to 80) — The feature dimension of the extracted features. - sampling_rate (
int
, optional, defaults to 22050) — The sampling rate at which the audio files should be digitalized expressed in hertz (Hz). - default_audio_length (
int
, optional, defaults to 6) — The default length of raw audio in seconds. Ifmax_length
is not set during__call__
then it will automatically be set to default_audio_length *self.sampling_rate
. - hop_length (
int
, optional, defaults to 256) — Length of the overlapping windows for the STFT used to obtain the Mel Frequency coefficients. - chunk_length (
int
, optional, defaults to 30) — The maximum number of chunks of sampling_rate
samples used to trim and pad longer or shorter audio sequences. - n_fft (
int
, optional, defaults to 1024) — Size of the Fourier transform. - padding_value (
float
, optional, defaults to 0.0) — Padding value used to pad the audio. Should correspond to silences. - mel_norms (
list
of lengthfeature_size
, optional) — Ifmel_norms
is provided then it will be used to normalize the log-mel spectrograms along each mel-filter. - return_attention_mask (
bool
, optional, defaults toFalse
) — Whether to return the attention mask. If left to the default, it will return the attention mask.
Constructs a CLVP feature extractor.
This feature extractor inherits from SequenceFeatureExtractor which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
This class extracts log-mel-spectrogram features from raw speech using a custom numpy implementation of the Short Time Fourier Transform
which should match pytorch’s torch.stft
equivalent.
__call__
< source >( raw_speech: typing.Union[numpy.ndarray, typing.List[float], typing.List[numpy.ndarray], typing.List[typing.List[float]]] sampling_rate: typing.Optional[int] = None truncation: bool = True pad_to_multiple_of: typing.Optional[int] = None return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None return_attention_mask: typing.Optional[bool] = True padding: typing.Optional[str] = 'max_length' max_length: typing.Optional[int] = None **kwargs )
Parameters
- raw_speech (
np.ndarray
,List[float]
,List[np.ndarray]
,List[List[float]]
) — The sequence or batch of sequences to be padded. Each sequence can be a numpy array, a list of float values, a list of numpy arrays or a list of list of float values. Must be mono channel audio, not stereo, i.e. single float per timestep. - sampling_rate (
int
, optional) — The sampling rate at which theraw_speech
input was sampled. It is strongly recommended to passsampling_rate
at the forward call to prevent silent errors and allow automatic speech recognition pipeline. - truncation (
bool
, optional, default toTrue
) — Activates truncation to cut input sequences longer than max_length to max_length. - pad_to_multiple_of (
int
, optional) — If set, will pad the sequence to a multiple of the provided value. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability
>= 7.5
(Volta), or on TPUs which benefit from having sequence lengths be a multiple of 128. - return_attention_mask (
bool
, optional, defaults toTrue
) — Whether to return the attention mask. If left to the default, it will return the attention mask. - return_tensors (
str
or TensorType, optional) — If set, will return tensors instead of list of python integers. Acceptable values are:'tf'
: Return TensorFlowtf.constant
objects.'pt'
: Return PyTorchtorch.Tensor
objects.'np'
: Return Numpynp.ndarray
objects.
- padding_value (
float
, defaults to 0.0) — The value that is used to fill the padding values / vectors. - max_length (
int
, optional) — The maximum input length of the inputs.
ClvpFeatureExtractor
is used to extract various voice specific properties such as the pitch and tone of the
voice, speaking speed, and even speaking defects like a lisp or stuttering from a sample voice or raw_speech
.
First the voice is padded or truncated in a way such that it becomes a waveform of self.default_audio_length
seconds long and then the log-mel spectrogram is extracted from it.
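A minimal sketch of running the feature extractor on its own (the checkpoint and the dummy dataset follow the examples above):
>>> import datasets
>>> from transformers import ClvpFeatureExtractor

>>> # Load a sample and resample it to 22.05 kHz, as expected by CLVP.
>>> ds = datasets.load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> ds = ds.cast_column("audio", datasets.Audio(sampling_rate=22050))
>>> sample = ds[0]["audio"]

>>> feature_extractor = ClvpFeatureExtractor.from_pretrained("susnato/clvp_dev")
>>> # `input_features` holds the log-mel spectrogram of shape (batch_size, feature_size, time_dim).
>>> inputs = feature_extractor(raw_speech=sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt")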
ClvpProcessor
class transformers.ClvpProcessor
< source >( feature_extractor tokenizer )
Parameters
- feature_extractor (
ClvpFeatureExtractor
) — An instance of ClvpFeatureExtractor. The feature extractor is a required input. - tokenizer (
ClvpTokenizer
) — An instance of ClvpTokenizer. The tokenizer is a required input.
Constructs a CLVP processor which wraps a CLVP Feature Extractor and a CLVP Tokenizer into a single processor.
ClvpProcessor offers all the functionalities of ClvpFeatureExtractor and ClvpTokenizer. See the call(), decode() and batch_decode() for more information.
Forwards the audio
and sampling_rate
arguments to call() and the text
argument to call(). Please refer to the docstring of the above two methods for more
information.
This method forwards all its arguments to ClvpTokenizer’s decode(). Please refer to the docstring of this method for more information.
This method forwards all its arguments to ClvpTokenizer’s batch_decode(). Please refer to the docstring of this method for more information.
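A short sketch of building the processor from its two components by hand (loading it with ClvpProcessor.from_pretrained, as in the examples above, is usually simpler):
>>> from transformers import ClvpFeatureExtractor, ClvpProcessor, ClvpTokenizer

>>> # Sketch: wrap a feature extractor and a tokenizer into a single processor.
>>> feature_extractor = ClvpFeatureExtractor.from_pretrained("susnato/clvp_dev")
>>> tokenizer = ClvpTokenizer.from_pretrained("susnato/clvp_dev")
>>> processor = ClvpProcessor(feature_extractor=feature_extractor, tokenizer=tokenizer)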
ClvpModelForConditionalGeneration
class transformers.ClvpModelForConditionalGeneration
< source >( config: ClvpConfig )
Parameters
- config (ClvpConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The composite CLVP model with a text encoder, speech encoder and speech decoder model. The speech decoder model generates the speech_ids from the text, and the text encoder and speech encoder work together to filter out the best speech_ids. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
forward
< source >( input_ids: LongTensor = None input_features: FloatTensor = None conditioning_encoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None text_encoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None attention_mask: typing.Optional[torch.LongTensor] = None return_loss: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = False return_dict: typing.Optional[bool] = None ) → transformers.models.clvp.modeling_clvp.ClvpOutput
or tuple(torch.FloatTensor)
Parameters
- input_ids (
torch.LongTensor
of shape(batch_size, sequence_length)
, optional) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
- input_features (
torch.FloatTensor
of shape(batch_size, feature_size, time_dim)
) — Indicates log mel-spectrogram representations for audio returned by ClvpFeatureExtractor. - conditioning_encoder_inputs_embeds (
torch.FloatTensor
, optional) — inputs_embeds forClvpConditioningEncoder
. Can be used in place ofinput_ids
. - text_encoder_inputs_embeds (
torch.FloatTensor
, optional) — inputs_embeds for the text encoder model passed in place ofinput_ids
. - attention_mask (
torch.Tensor
of shape(batch_size, sequence_length)
, optional) — Mask to avoid performing attention on padding text token indices. Mask values selected in[0, 1]
:- 1 for tokens that are not masked,
- 0 for tokens that are masked.
- return_loss (
bool
, optional) — Whether or not to return the contrastive loss. - output_attentions (
bool
, optional) — Whether or not to return the attentions tensors of all attention layers. Seeattentions
under returned tensors for more detail. - output_hidden_states (
bool
, optional) — Whether or not to return the hidden states of all layers. Seehidden_states
under returned tensors for more detail. - return_dict (
bool
, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.clvp.modeling_clvp.ClvpOutput
or tuple(torch.FloatTensor)
A transformers.models.clvp.modeling_clvp.ClvpOutput
or a tuple of
torch.FloatTensor
(if return_dict=False
is passed or when config.return_dict=False
) comprising various
elements depending on the configuration (<class 'transformers.models.clvp.configuration_clvp.ClvpConfig'>
) and inputs.
- loss (
torch.FloatTensor
of shape(1,)
, optional, returned whenreturn_loss
isTrue
) — Contrastive loss for speech-text similarity. - speech_ids (
torch.LongTensor
, optional) — speech_ids (or speech candidates) generated by theClvpForCausalLM
model. - logits_per_speech (
torch.FloatTensor
of shape(speech_batch_size, text_batch_size)
) — The scaled dot product scores betweenspeech_embeds
andtext_embeds
. This represents the speech-text similarity scores. - logits_per_text (
torch.FloatTensor
of shape(text_batch_size, speech_batch_size)
) — The scaled dot product scores betweentext_embeds
andspeech_embeds
. This represents the text-speech similarity scores. - text_embeds (
torch.FloatTensor
of shape(batch_size, output_dim)
) — The text embeddings obtained by applying the projection layer to the pooled output of the text encoder model. - speech_embeds (
torch.FloatTensor
of shape(batch_size, output_dim)
) — The speech embeddings obtained by applying the projection layer to the pooled output of the speech encoder model. - text_model_output (
BaseModelOutputWithPooling
) — The pooled output of thelast_hidden_state
of the text encoder Model. - speech_model_output (
BaseModelOutputWithPooling
) — The pooled output of thelast_hidden_state
of the speech encoder Model. - decoder_hidden_states (
torch.FloatTensor
, optional) — The hidden states of the decoder model. - text_encoder_hidden_states (
torch.FloatTensor
, optional) — The hidden states of the text encoder model. - speech_encoder_hidden_states (
torch.FloatTensor
, optional) — The hidden states of the speech encoder model.
The ClvpModelForConditionalGeneration forward method, overrides the __call__
special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
>>> import datasets
>>> from transformers import ClvpProcessor, ClvpModelForConditionalGeneration
>>> # Define the Text and Load the Audio (We are taking an audio example from HuggingFace Hub using `datasets` library)
>>> text = "This is an example text."
>>> ds = datasets.load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> ds = ds.cast_column("audio", datasets.Audio(sampling_rate=22050))
>>> _, audio, sr = ds.sort("id").select(range(1))[:1]["audio"][0].values()
>>> # Define processor and model
>>> processor = ClvpProcessor.from_pretrained("susnato/clvp_dev")
>>> model = ClvpModelForConditionalGeneration.from_pretrained("susnato/clvp_dev")
>>> # processor outputs and model outputs
>>> processor_output = processor(raw_speech=audio, sampling_rate=sr, text=text, return_tensors="pt")
>>> outputs = model(
... input_ids=processor_output["input_ids"],
... input_features=processor_output["input_features"],
... return_dict=True,
... )
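A brief follow-up sketch: passing return_loss=True additionally computes the contrastive speech-text loss described in the returns section above.
>>> # Sketch: also request the contrastive loss and inspect the similarity scores.
>>> outputs = model(
...     input_ids=processor_output["input_ids"],
...     input_features=processor_output["input_features"],
...     return_loss=True,
...     return_dict=True,
... )
>>> loss = outputs.loss
>>> text_speech_similarity = outputs.logits_per_text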
generate
< source >( input_ids: LongTensor = None input_features: FloatTensor = None attention_mask: typing.Optional[torch.LongTensor] = None generation_config: typing.Optional[transformers.generation.configuration_utils.GenerationConfig] = None pad_to_max_mel_tokens: typing.Optional[int] = None output_hidden_states: typing.Optional[bool] = None **kwargs ) → ClvpOutput
or tuple
Parameters
- input_ids (
torch.FloatTensor
of shape(batch_size, sequence_length)
, optional) — Input text Tokens. Processed from the ClvpTokenizer. - input_features (
torch.FloatTensor
of shape(batch_size, feature_size, time_dim)
, optional) — Indicates log mel-spectrogram representations for audio returned by ClvpFeatureExtractor. - attention_mask (
torch.Tensor
of shape(batch_size, sequence_length)
, optional) — Mask to avoid performing attention on padding text token indices. Mask values selected in[0, 1]
:- 1 for tokens that are not masked,
- 0 for tokens that are masked.
- generation_config (
~generation.GenerationConfig
, optional) — The generation configuration to be used as base parametrization for the generation call.**kwargs
passed to generate matching the attributes ofgeneration_config
will override them. Ifgeneration_config
is not provided, the default will be used, which had the following loading priority: 1) from thegeneration_config.json
model file, if it exists; 2) from the model configuration. Please note that unspecified parameters will inherit GenerationConfig’s default values, whose documentation should be checked to parameterize generation. - pad_to_max_mel_tokens (
int
, optional) — Pads generated speech_ids to the specified value. This implements the same logic as the official repo (link: https://github.com/neonbjb/tortoise-tts/blob/80f89987a5abda5e2b082618cd74f9c7411141dc/tortoise/api.py#L430) and ensures the logits are the same. This does not affect generation quality, so using it is not recommended since it is less efficient. - output_hidden_states (
bool
, optional) — Whether or not to return the hidden states of decoder model, text encoder and speech encoder models.
Returns
ClvpOutput
or tuple
A ClvpOutput
(if return_dict_in_generate=True
or when
config.return_dict_in_generate=True
) or a tuple.
Generate method for ClvpModelForConditionalGeneration. This method calls the generate method of ClvpForCausalLM and then uses the generated speech_ids to compute text_embeds and speech_embeds using ClvpEncoder.
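A short sketch of passing a GenerationConfig to control the autoregressive decoding; it continues from the example at the top of this page, and the concrete sampling values are illustrative assumptions rather than the original Tortoise settings:
>>> from transformers import GenerationConfig

>>> # Sketch: sample several speech candidates per text input; the values below are
>>> # illustrative assumptions, not the settings used by the original Tortoise code.
>>> generation_config = GenerationConfig(do_sample=True, num_return_sequences=4, top_k=50)
>>> generated_output = model.generate(**processor_output, generation_config=generation_config)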
get_text_features
< source >( input_ids: typing.Optional[torch.LongTensor] = None text_encoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None attention_mask: typing.Optional[torch.LongTensor] = None ) → torch.FloatTensor
of shape (batch_size, output_dim)
Parameters
- input_ids (
torch.LongTensor
of shape(batch_size, sequence_length)
) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. - text_encoder_inputs_embeds (
torch.FloatTensor
, optional) — inputs_embeds for the text encoder model passed in place ofinput_ids
. - attention_mask (
torch.Tensor
of shape(batch_size, sequence_length)
, optional) — Mask to avoid performing attention on padding token indices. Mask values selected in[0, 1]
:- 1 for tokens that are not masked,
- 0 for tokens that are masked.
Returns
torch.FloatTensor
of shape (batch_size, output_dim)
The text embeddings obtained by applying the projection layer to the pooled output of the CLVP Text Model.
This method can be used to extract text_embeds from a text. The text embeddings obtained by applying the projection layer to the pooled output of the CLVP text encoder model.
Examples:
>>> from transformers import ClvpProcessor, ClvpModelForConditionalGeneration
>>> # Define the Text
>>> text = "This is an example text."
>>> # Define processor and model
>>> processor = ClvpProcessor.from_pretrained("susnato/clvp_dev")
>>> model = ClvpModelForConditionalGeneration.from_pretrained("susnato/clvp_dev")
>>> # Generate processor output and text embeds
>>> processor_output = processor(text=text, return_tensors="pt")
>>> text_embeds = model.get_text_features(input_ids=processor_output["input_ids"])
get_speech_features
< source >( speech_ids: typing.Optional[torch.LongTensor] = None input_ids: typing.Optional[torch.LongTensor] = None input_features: typing.Optional[torch.FloatTensor] = None conditioning_encoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None attention_mask: typing.Optional[torch.Tensor] = None generation_config: typing.Optional[transformers.generation.configuration_utils.GenerationConfig] = None **kwargs ) → torch.FloatTensor
of shape (batch_size, output_dim)
Parameters
- speech_ids (
torch.LongTensor
of shape(batch_size, num_speech_ids)
, optional) — Speech Tokens. Padding will be ignored by default should you provide it. If speech_ids are provided then input_ids and input_features will be automatically ignored. - input_ids (
torch.LongTensor
of shape(batch_size, sequence_length)
, optional) — Input text Tokens. Processed from the ClvpTokenizer. If speech_ids is not provided, then input_ids and input_features will be used. - input_features (
torch.FloatTensor
of shape(batch_size, feature_size, time_dim)
, optional) — Indicates log mel-spectrogram representations for audio returned by ClvpFeatureExtractor. If speech_ids is not provided, then input_ids and input_features will be used. - conditioning_encoder_inputs_embeds (
torch.FloatTensor
, optional) — inputs_embeds forClvpConditioningEncoder
. Can be used in place ofinput_ids
. - attention_mask (
torch.LongTensor
of shape(batch_size, sequence_length)
, optional) — Mask to avoid performing attention on padding speech token indices. Mask values selected in[0, 1]
:- 1 for tokens that are not masked,
- 0 for tokens that are masked.
- generation_config (
GenerationConfig
, optional) — generation config to control the generation of speech_ids if they are not provided.
Returns
torch.FloatTensor
of shape (batch_size, output_dim)
The speech embeddings obtained by applying the projection layer to the pooled output of the CLVP Speech Model.
This method can be used to extract speech_embeds. The speech embeddings are obtained by applying the speech model on speech_ids. If speech_ids is not present but both input_ids and input_features are given, then the decoder model will be used to first generate the speech_ids, and the speech model will then be applied to them.
Examples:
>>> import datasets
>>> from transformers import ClvpProcessor, ClvpModelForConditionalGeneration
>>> # Define the Text and Load the Audio (We are taking an audio example from HuggingFace Hub using `datasets` library)
>>> text = "This is an example text."
>>> ds = datasets.load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> ds = ds.cast_column("audio", datasets.Audio(sampling_rate=22050))
>>> _, audio, sr = ds.sort("id").select(range(1))[:1]["audio"][0].values()
>>> # Define processor and model
>>> processor = ClvpProcessor.from_pretrained("susnato/clvp_dev")
>>> model = ClvpModelForConditionalGeneration.from_pretrained("susnato/clvp_dev")
>>> # Generate processor output and model output
>>> processor_output = processor(raw_speech=audio, sampling_rate=sr, text=text, return_tensors="pt")
>>> speech_embeds = model.get_speech_features(
... input_ids=processor_output["input_ids"], input_features=processor_output["input_features"]
... )
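Since get_text_features and get_speech_features project into the same latent space, the two embeddings can be compared directly. A small sketch using cosine similarity, mirroring in simplified form the ranking that generate() performs internally:
>>> import torch

>>> # Sketch: compare text and speech embeddings in the shared latent space.
>>> text_embeds = model.get_text_features(input_ids=processor_output["input_ids"])
>>> similarity = torch.nn.functional.cosine_similarity(text_embeds, speech_embeds)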
ClvpForCausalLM
class transformers.ClvpForCausalLM
< source >( config )
Parameters
- config (ClvpConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The CLVP decoder model with a language modelling head on top. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
forward
< source >( input_ids: typing.Optional[torch.LongTensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None )
Parameters
- input_ids (
torch.LongTensor
of shape(batch_size, input_ids_length)
) — Indices of input sequence tokens in the vocabulary.Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
- past_key_values (
Tuple[Tuple[torch.Tensor]]
of lengthconfig.n_layers
) — Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (seepast_key_values
output below). Can be used to speed up sequential decoding. Theinput_ids
which have their past given to this model should not be passed asinput_ids
as they have already been computed. - attention_mask (
torch.FloatTensor
of shape(batch_size, sequence_length)
, optional) — Mask to avoid performing attention on padding token indices. Mask values selected in[0, 1]
:- 1 for tokens that are not masked,
- 0 for tokens that are masked.
If
past_key_values
is used,attention_mask
needs to contain the masking strategy that was used forpast_key_values
. In other words, theattention_mask
always has to have the length:len(past_key_values) + len(input_ids)
- token_type_ids (
torch.LongTensor
of shape(batch_size, input_ids_length)
, optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in[0, 1]
:- 0 corresponds to a sentence A token,
- 1 corresponds to a sentence B token.
- position_ids (
torch.LongTensor
of shape(batch_size, sequence_length)
, optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range[0, config.max_position_embeddings - 1]
. - head_mask (
torch.FloatTensor
of shape(num_heads,)
or(num_layers, num_heads)
, optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in[0, 1]
:- 1 indicates the head is not masked,
- 0 indicates the head is masked.
- inputs_embeds (
torch.FloatTensor
of shape(batch_size, sequence_length, hidden_size)
, optional) — Optionally, instead of passinginput_ids
you can choose to directly pass an embedded representation. This is useful if you want more control over how to convertinput_ids
indices into associated vectors than the model’s internal embedding lookup matrix.If
past_key_values
is used, optionally only the lastinputs_embeds
have to be input (seepast_key_values
). - use_cache (
bool
, optional) — If set toTrue
,past_key_values
key value states are returned and can be used to speed up decoding (seepast_key_values
). - output_attentions (
bool
, optional) — Whether or not to return the attentions tensors of all attention layers. Seeattentions
under returned tensors for more detail. - output_hidden_states (
bool
, optional) — Whether or not to return the hidden states of all layers. Seehidden_states
under returned tensors for more detail. - return_dict (
bool
, optional) — Whether or not to return a ModelOutput instead of a plain tuple. - labels (
torch.LongTensor
of shape(batch_size, sequence_length)
, optional) — Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can setlabels = input_ids
Indices are selected in[-100, 0, ..., config.vocab_size]
All labels set to-100
are ignored (masked), the loss is only computed for labels in[0, ..., config.vocab_size]
The ClvpForCausalLM forward method, overrides the __call__
special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
ClvpModel
class transformers.ClvpModel
< source >( config: ClvpDecoderConfig )
Parameters
- config (ClvpConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The bare Clvp decoder model outputting raw hidden-states without any specific head on top. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
forward
< source >( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None )
Parameters
- input_ids (
torch.LongTensor
of shape(batch_size, input_ids_length)
) — Indices of input sequence tokens in the vocabulary.Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
- past_key_values (
Tuple[Tuple[torch.Tensor]]
of lengthconfig.n_layers
) — Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (seepast_key_values
output below). Can be used to speed up sequential decoding. Theinput_ids
which have their past given to this model should not be passed asinput_ids
as they have already been computed. - attention_mask (
torch.FloatTensor
of shape(batch_size, sequence_length)
, optional) — Mask to avoid performing attention on padding token indices. Mask values selected in[0, 1]
:- 1 for tokens that are not masked,
- 0 for tokens that are masked.
If
past_key_values
is used,attention_mask
needs to contain the masking strategy that was used forpast_key_values
. In other words, theattention_mask
always has to have the length:len(past_key_values) + len(input_ids)
- token_type_ids (
torch.LongTensor
of shape(batch_size, input_ids_length)
, optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in[0, 1]
:- 0 corresponds to a sentence A token,
- 1 corresponds to a sentence B token.
- position_ids (
torch.LongTensor
of shape(batch_size, sequence_length)
, optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range[0, config.max_position_embeddings - 1]
. - head_mask (
torch.FloatTensor
of shape(num_heads,)
or(num_layers, num_heads)
, optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in[0, 1]
:- 1 indicates the head is not masked,
- 0 indicates the head is masked.
- inputs_embeds (
torch.FloatTensor
of shape(batch_size, sequence_length, hidden_size)
, optional) — Optionally, instead of passinginput_ids
you can choose to directly pass an embedded representation. This is useful if you want more control over how to convertinput_ids
indices into associated vectors than the model’s internal embedding lookup matrix.If
past_key_values
is used, optionally only the lastinputs_embeds
have to be input (seepast_key_values
). - use_cache (
bool
, optional) — If set toTrue
,past_key_values
key value states are returned and can be used to speed up decoding (seepast_key_values
). - output_attentions (
bool
, optional) — Whether or not to return the attentions tensors of all attention layers. Seeattentions
under returned tensors for more detail. - output_hidden_states (
bool
, optional) — Whether or not to return the hidden states of all layers. Seehidden_states
under returned tensors for more detail. - return_dict (
bool
, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
The ClvpModel forward method, overrides the __call__
special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
ClvpEncoder
Transformer encoder consisting of config.num_hidden_layers
self attention layers. Each layer is a
ClvpEncoderLayer
.
forward
< source >( input_ids: typing.Optional[torch.LongTensor] = None inputs_embeds: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None )
Parameters
- input_ids (
torch.LongTensor
of shape(batch_size, input_ids_length)
, optional) — Indices of input sequence tokens in the vocabulary.Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
- inputs_embeds (
torch.FloatTensor
of shape(batch_size, sequence_length, hidden_size)
, optional) — input embeddings for the model. This bypasses the model’s internal embedding lookup matrix. - attention_mask (
torch.LongTensor
of shape(batch_size, sequence_length)
, optional) — Mask to avoid performing attention on padding token indices. Mask values selected in[0, 1]
:- 1 for tokens that are not masked,
- 0 for tokens that are masked.
- position_ids (
torch.LongTensor
, optional) — Denotes the position ids ofinput_ids
. - output_attentions (
bool
, optional) — Whether or not to return the attentions tensors of all attention layers. Seeattentions
under returned tensors for more detail. - output_hidden_states (
bool
, optional) — Whether or not to return the hidden states of all layers. Seehidden_states
under returned tensors for more detail. - return_dict (
bool
, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
ClvpDecoder
Transformer decoder consisting of config.num_hidden_layers layers. Each layer is a ClvpDecoderLayer.
forward
< source >( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None )
Parameters
- input_ids (
torch.LongTensor
of shape(batch_size, input_ids_length)
) — Indices of input sequence tokens in the vocabulary.Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
- past_key_values (
Tuple[Tuple[torch.Tensor]]
of lengthconfig.n_layers
) — Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (seepast_key_values
output below). Can be used to speed up sequential decoding. Theinput_ids
which have their past given to this model should not be passed asinput_ids
as they have already been computed. - attention_mask (
torch.FloatTensor
of shape(batch_size, sequence_length)
, optional) — Mask to avoid performing attention on padding token indices. Mask values selected in[0, 1]
:- 1 for tokens that are not masked,
- 0 for tokens that are masked.
If
past_key_values
is used,attention_mask
needs to contain the masking strategy that was used forpast_key_values
. In other words, theattention_mask
always has to have the length:len(past_key_values) + len(input_ids)
- token_type_ids (
torch.LongTensor
of shape(batch_size, input_ids_length)
, optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in[0, 1]
:- 0 corresponds to a sentence A token,
- 1 corresponds to a sentence B token.
- position_ids (
torch.LongTensor
of shape(batch_size, sequence_length)
, optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range[0, config.max_position_embeddings - 1]
. - head_mask (
torch.FloatTensor
of shape(num_heads,)
or(num_layers, num_heads)
, optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in[0, 1]
:- 1 indicates the head is not masked,
- 0 indicates the head is masked.
- inputs_embeds (
torch.FloatTensor
of shape(batch_size, sequence_length, hidden_size)
, optional) — Optionally, instead of passinginput_ids
you can choose to directly pass an embedded representation. This is useful if you want more control over how to convertinput_ids
indices into associated vectors than the model’s internal embedding lookup matrix.If
past_key_values
is used, optionally only the lastinputs_embeds
have to be input (seepast_key_values
). - use_cache (
bool
, optional) — If set toTrue
,past_key_values
key value states are returned and can be used to speed up decoding (seepast_key_values
). - output_attentions (
bool
, optional) — Whether or not to return the attentions tensors of all attention layers. Seeattentions
under returned tensors for more detail. - output_hidden_states (
bool
, optional) — Whether or not to return the hidden states of all layers. Seehidden_states
under returned tensors for more detail. - return_dict (
bool
, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
The ClvpDecoder forward method, overrides the __call__
special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.