TinyOctopus: Bilingual Audio Language Model πŸ™πŸ”Š

πŸ“’ Overview

TinyOctopus is a Bilingual Audio Language Model (Audio-LLM) designed to process and generate text from audio inputs. The model leverages Distil-Whisper (distil-large-v3) for audio encoding, a cross-attention projection layer for alignment, and DeepSeek 1.5B for text generation. TinyOctopus is optimized for tasks such as:

  • Bilingual Automatic Speech Recognition (ASR) πŸ—£οΈ
  • Arabic to English Speech Translation 🌍
  • Spoken Arabic Dialect Identification

TinyOctopus is built around the following architecture:

πŸ— Model Architecture

TinyOctopus integrates:

  1. Distil-Whisper (distil-large-v3) for encoding audio inputs.
  2. Cross-Attention Projection Layer (trainable) to align audio features with textual representations.
  3. DeepSeek 1.5B as the core language model for text generation.
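
As a rough illustration of step 2, here is a minimal single-head cross-attention in NumPy, in which text-token queries attend over encoded audio frames. This is a sketch, not the TinyOctopus implementation: the 1280/1536 widths match the distil-large-v3 and DeepSeek 1.5B hidden sizes, but the 64-dimensional head and the random weights are arbitrary stand-ins for the trained projection layer.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(text_queries, audio_features, W_q, W_k, W_v):
    """Single-head cross-attention: each text token attends over all audio frames."""
    Q = text_queries @ W_q        # (T_text, d_head)
    K = audio_features @ W_k      # (T_audio, d_head)
    V = audio_features @ W_v      # (T_audio, d_head)
    weights = softmax(Q @ K.T / np.sqrt(Q.shape[-1]))   # (T_text, T_audio)
    return weights @ V            # audio-conditioned text representations

rng = np.random.default_rng(0)
d_audio, d_text, d_head = 1280, 1536, 64   # encoder/LLM widths; d_head is arbitrary here
audio = rng.normal(size=(50, d_audio))     # 50 encoded frames (stand-in for Distil-Whisper output)
text = rng.normal(size=(8, d_text))        # 8 token states (stand-in for DeepSeek hidden states)
out = cross_attention(text, audio,
                      rng.normal(size=(d_text, d_head)),
                      rng.normal(size=(d_audio, d_head)),
                      rng.normal(size=(d_audio, d_head)))
print(out.shape)   # (8, 64)
```

In the real model this projection is the only trainable bridge between the frozen-style encoder features and the language model's input space.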

πŸ“‚ Dataset

The model has been trained on multiple datasets to optimize its performance across different tasks:

  • QASR Dataset: QASR is the largest transcribed Arabic speech corpus, collected from the broadcast domain. It contains 2,000 hours of multi-dialect speech sampled at 16kHz from Al Jazeera News Channel, with lightly supervised transcriptions aligned with the audio segments. Unlike previous datasets, QASR includes linguistically motivated segmentation, punctuation, speaker information, and more. The dataset is suitable for ASR, Arabic dialect identification, punctuation restoration, speaker identification, and NLP applications. Additionally, a 130M-word language model dataset is available to aid language modeling. Speech recognition models trained on QASR achieve competitive WER compared to the MGB-2 corpus, and it has been used for downstream tasks like Named Entity Recognition (NER) and punctuation restoration.

  • ADI17 Dataset: ADI17 is a large-scale Arabic Dialect Identification (DID) dataset, collected from YouTube videos across 17 Arabic-speaking countries in the Middle East and North Africa. It contains 3,000 hours of speech for training DID systems and an additional 57 hours for development and testing. The dataset is categorized into short (<5s), medium (5-20s), and long (>20s) speech segments for detailed evaluation. ADI17 enables state-of-the-art dialect identification and provides a robust evaluation platform. It has been benchmarked on domain-mismatched conditions using the Multi-Genre Broadcast 3 (MGB-3) test set.
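
The short/medium/long split used by ADI17 is a simple duration threshold; a small helper (the function name is ours, for illustration) reproduces it:

```python
def adi17_bucket(duration_s: float) -> str:
    """Bucket a speech segment by the ADI17 duration splits:
    short (<5 s), medium (5-20 s), long (>20 s)."""
    if duration_s < 5:
        return "short"
    if duration_s <= 20:
        return "medium"
    return "long"

print([adi17_bucket(t) for t in (3.2, 12.0, 47.5)])   # ['short', 'medium', 'long']
```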

βš™οΈ Installation & Usage

πŸ’» Install Dependencies

pip install -r requirements.txt

Inference

from inference import transcribe

audio_path = "path/to/audio.wav"  # Replace with your actual audio file
output = transcribe(audio_path, task="asr")  # Options: "dialect", "asr", "translation"

print("Generated Text:", output)

How to Try It?

You can test the model by uploading or recording your own audio files using the Gradio demo:
➑️ Try the Model


Evaluation Results

ASR Performance (WER & Error Breakdown)

Task                                  WER (%)   Substitution (%)   Deletion (%)   Insertion (%)
ASR_QASR (Arabic)                     16.00     9.5                2.7            3.8
ASR_LibriSpeech&TEDLIUM (English)     4.50      3.0                0.8            0.7
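
The WER columns decompose word-level edit distance into substitutions, deletions, and insertions. The sketch below (our own helper, not the project's evaluation script) computes that breakdown with a standard Levenshtein alignment over words:

```python
def wer_breakdown(reference: str, hypothesis: str) -> dict:
    """Word error rate with substitution/deletion/insertion counts."""
    ref, hyp = reference.split(), hypothesis.split()
    n, m = len(ref), len(hyp)
    # dp[i][j] = (edit_cost, subs, dels, ins) aligning ref[:i] with hyp[:j]
    dp = [[(0, 0, 0, 0) for _ in range(m + 1)] for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = (i, 0, i, 0)  # delete every reference word
    for j in range(1, m + 1):
        dp[0][j] = (j, 0, 0, j)  # insert every hypothesis word
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if ref[i - 1] == hyp[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]          # match: no edit
                continue
            s_cost = dp[i - 1][j - 1][0]
            d_cost = dp[i - 1][j][0]
            i_cost = dp[i][j - 1][0]
            if s_cost <= d_cost and s_cost <= i_cost:
                c, s, d, ins = dp[i - 1][j - 1]
                dp[i][j] = (c + 1, s + 1, d, ins)    # substitution
            elif d_cost <= i_cost:
                c, s, d, ins = dp[i - 1][j]
                dp[i][j] = (c + 1, s, d + 1, ins)    # deletion
            else:
                c, s, d, ins = dp[i][j - 1]
                dp[i][j] = (c + 1, s, d, ins + 1)    # insertion
    cost, subs, dels, ins = dp[n][m]
    return {"wer": cost / max(n, 1), "sub": subs, "del": dels, "ins": ins}

print(wer_breakdown("the cat sat", "the cat sits"))  # one substitution, WER = 1/3
```

Ties between edit types are broken in favor of substitution, then deletion; toolkits differ here, so the breakdown (not the WER itself) can vary slightly across implementations.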

Translation Performance (BLEU Scores)

Task          BLEU (GPT-4o)   BLEU (Google)
Translation   55.05           43.23
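
BLEU scores n-gram overlap between a hypothesis translation and a reference, with a brevity penalty for short outputs. The reported numbers were presumably produced with a standard corpus-level toolkit such as sacreBLEU; the unsmoothed sentence-level sketch below only illustrates the metric:

```python
import math
from collections import Counter

def bleu(reference: str, hypothesis: str, max_n: int = 4) -> float:
    """Unsmoothed sentence-level BLEU with brevity penalty."""
    ref, hyp = reference.split(), hypothesis.split()
    if not hyp:
        return 0.0
    log_prec = 0.0
    for n in range(1, max_n + 1):
        hyp_ng = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
        ref_ng = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        overlap = sum(min(c, ref_ng[g]) for g, c in hyp_ng.items())  # clipped counts
        total = sum(hyp_ng.values())
        if total == 0 or overlap == 0:
            return 0.0  # unsmoothed: any zero n-gram precision zeroes the score
        log_prec += math.log(overlap / total) / max_n
    bp = 1.0 if len(hyp) >= len(ref) else math.exp(1 - len(ref) / len(hyp))
    return bp * math.exp(log_prec)

print(bleu("the model translates arabic speech", "the model translates arabic speech"))  # 1.0
```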

Dialect Identification Accuracy

Task                     Accuracy (%)
Dialect Identification   70.59

(Figure: confusion matrix of the ADI17 test set)


Examples

Example 1: Arabic Speech Recognition

🎡 Audio Input (Arabic):

πŸ“ User Prompt:

Transcribe the audio or Ω‚Ω… Ψ¨Ψͺفريغ Ψ§Ω„Ω…Ω‚Ψ·ΨΉ Ψ§Ω„Ψ΅ΩˆΨͺي

πŸ’‘ System Response:

Ψ£Ω‡Ω„Ψ§ Ψ¨ΩƒΩ… Ω…Ψ΄Ψ§Ω‡Ψ―ΩŠΩ†Ψ§ Ψ§Ω„ΩƒΨ±Ψ§Ω… في Ψ­Ω„Ω‚Ψ© جديدة Ω…Ω† Ψ¨Ψ±Ω†Ψ§Ω…Ψ¬ Ψ§Ω„Ψ§Ω‚ΨͺΨ΅Ψ§Ψ― ΩˆΨ§Ω„Ω†Ψ§Ψ³ (English: "Welcome, dear viewers, to a new episode of the program Economy and the People")

🎡 Audio Input (English):

πŸ“ User Prompt:

Transcribe the audio or Ω‚Ω… Ψ¨Ψͺفريغ Ψ§Ω„Ω…Ω‚Ψ·ΨΉ Ψ§Ω„Ψ΅ΩˆΨͺي

πŸ’‘ System Response:

NO IT'S NOT TOO SOON


Example 2: Arabic to English Translation

🎡 Audio Input:

πŸ“ User Prompt:

Translate the following Arabic speech into English or Ω‚Ω… Ψ¨ΨͺΨ±Ψ¬Ω…Ψ© Ψ§Ω„Ω…Ω‚Ψ·ΨΉ Ω„Ω„Ψ₯Ω†Ψ¬Ω„ΩŠΨ²ΩŠΨ©

πŸ’‘ System Response:

I took a loan a certain amount of money to pay off the debt


Example 3: Dialect Identification

🎡 Audio Input:

πŸ“ User Prompt:

Identify the dialect of the given speech or Ω…Ψ§Ω‡ΩŠ Ω„Ω‡Ψ¬Ψ© Ψ§Ω„Ω…Ψͺحدث؟

πŸ’‘ System Response:

KSA

