---
language:
- ms
- en
---

# Malaysian-Podcast-sesame-csm-1b

Full-parameter finetuning of [sesame/csm-1b](https://huggingface.co/sesame/csm-1b) on Malaysian Podcast from [mesolitica/Malaysian-Emilia](https://huggingface.co/datasets/mesolitica/Malaysian-Emilia), where the permutations for voice conversion only select pairs that are at least 80% similar.

## How we trained it

1. Finetuning was done with FP32-BF16 mixed precision training.
2. Multipacking for the decoder.
3. WandB logs at https://wandb.ai/huseinzol05/sesame-1b-malaysian-emilia-full-mixed-precision-podcast

## Source code

Source code at https://github.com/mesolitica/malaya-speech/tree/master/session/sesame-tts

## Acknowledgement

Special thanks to https://www.sns.com.my and Nvidia for the 8x H100 node!
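The multipacking mentioned in the training steps concatenates several short utterances into one fixed-length decoder sequence to cut padding waste. A minimal greedy sketch of the idea, assuming a simple first-fit strategy over sequence lengths (the function name and the length cap are illustrative, not taken from the training code):

```python
def multipack(lengths, max_len):
    """Greedily group sequence indices into packs whose total length <= max_len."""
    packs, current, used = [], [], 0
    for i, n in enumerate(lengths):
        if n > max_len:
            raise ValueError(f"sequence {i} is longer than max_len")
        if used + n > max_len:  # current pack is full, start a new one
            packs.append(current)
            current, used = [], 0
        current.append(i)
        used += n
    if current:
        packs.append(current)
    return packs


# Five utterances of varying token lengths packed into 1024-token sequences.
print(multipack([700, 300, 512, 1024, 256], max_len=1024))
# → [[0, 1], [2], [3], [4]]
```

In a real training pipeline the packed sequences would also carry an attention mask (or position-id reset) so tokens from different utterances cannot attend to each other.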