---
title: README
emoji: 🎨
colorFrom: gray
colorTo: gray
sdk: static
pinned: false
---
# MAAP LAB 🎵 – Music AI & Audio Research

> "Why are there so few labs in Korea dedicated to Music AI? We built one."
**Focus areas:**
🎼 Audio Generation · 🏷️ Music Tagging · 🗣️ Voice Conversion · 🧠 Transformers · 🎨 Diffusion
## Mission

Advance the foundations of Music AI through practical research in tagging, generation, and dataset-centric methods, then share our results openly with the community. ✨
## Open Science

We aim to publish at top venues (e.g., ICASSP, ISMIR, AAAI) and release code, models, and datasets whenever possible. 📢
## Latest News 🗞️

- ✅ First project completed! Submitted 2 papers to a NeurIPS Workshop based on our Music Tagging pipeline and dataset work. Links to be added 🔗
- 🧰 GPU resources via university support: NVIDIA A100, A6000, RTX 4090 ⚙️
## Our Activities 🎯

### Project 1 – Music Tagging (Completed) 🏷️

- Built a tagging & augmentation pipeline with CLAP, Beam Search, and Stable Audio
- Focus: dataset augmentation/creation for future work
- Targets: short-term word generation → long-term sentence generation with LLMs
- Outcome: 2 NeurIPS Workshop submissions ✅
- Links: to be added 🔗
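The core idea behind CLAP-style tagging is zero-shot retrieval: embed the audio and the candidate tag texts into a shared space, then rank tags by cosine similarity. A minimal sketch of that ranking step, with small random vectors standing in for real CLAP embeddings (the embeddings, tag names, and dimensions here are illustrative, not our actual pipeline):

```python
import numpy as np

def tag_audio(audio_emb, tag_embs, tag_names, top_k=2):
    """Rank candidate tags by cosine similarity to an audio embedding,
    the retrieval step of CLAP-style zero-shot tagging."""
    a = audio_emb / np.linalg.norm(audio_emb)
    t = tag_embs / np.linalg.norm(tag_embs, axis=1, keepdims=True)
    scores = t @ a                         # cosine similarity per tag
    order = np.argsort(scores)[::-1][:top_k]
    return [(tag_names[i], float(scores[i])) for i in order]

# Toy 4-d embeddings standing in for CLAP outputs (hypothetical values).
rng = np.random.default_rng(0)
tags = ["piano", "guitar", "drums"]
tag_embs = rng.normal(size=(3, 4))
audio_emb = tag_embs[0] + 0.1 * rng.normal(size=4)  # close to "piano"
print(tag_audio(audio_emb, tag_embs, tags))
```

In a real pipeline the embeddings would come from a pretrained CLAP audio/text encoder pair; the ranking logic stays the same.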
### Project 2 – Efficient Music Generation (In Progress) 🎶

- Exploring Diffusion & DiT (e.g., Flux)
- LoRA/Adapters to avoid full fine-tuning
- Goal: robust generation for data-scarce genres/instruments/domains
- Roadmap: dataset curation → baseline reproduction → adapter experiments → ablations → release
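Why LoRA instead of full fine-tuning: a frozen weight matrix W gets a trainable low-rank update scaled by alpha/r, so only the two small factor matrices are trained. A self-contained sketch of the idea in NumPy (the class, shapes, and hyperparameters are illustrative, not the actual adapter setup we use):

```python
import numpy as np

class LoRALinear:
    """Frozen dense layer plus a trainable low-rank update (LoRA):
    y = x @ W + (alpha / r) * (x @ A @ B), with only A and B trained."""
    def __init__(self, W, r=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        self.W = W                              # frozen pretrained weight
        self.A = rng.normal(0.0, 0.01, (W.shape[0], r))
        self.B = np.zeros((r, W.shape[1]))      # zero init: update starts at 0
        self.scale = alpha / r

    def forward(self, x):
        return x @ self.W + self.scale * (x @ self.A @ self.B)

    def trainable_params(self):
        return self.A.size + self.B.size        # only the adapter is trained

d_in, d_out = 512, 512
layer = LoRALinear(np.zeros((d_in, d_out)), r=4)
full = d_in * d_out
print(layer.trainable_params(), "vs", full)     # 4096 vs 262144
```

With rank 4 on a 512×512 layer, the adapter trains 4,096 parameters instead of 262,144, which is what makes adapter experiments feasible on data-scarce domains.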
## Publications & Submissions 📚

- NeurIPS Workshop Submission #1 (TBD) ✍️
- NeurIPS Workshop Submission #2 (TBD) ✍️
## Get Involved 🤝

Interested in collaborating on Music AI? We welcome discussions on datasets, evaluation, and model design.

Contact: [email protected] ✉️
© 2025 MAAP LAB • Built with ❤️ for music & AI.