Organization Card
Multimodal Art Projection (M-A-P) is an open-source AI research community.
The community works on research topics across a broad spectrum, including but not limited to pre-training paradigms for foundation models, large-scale data collection and processing, and derived applications in coding, reasoning, and music creativity.
The community is open to researchers keen on any relevant topic. Welcome to join us!
- Discord Channel
- Our Full Paper List
- Mail: [email protected]
The development log of our Multimodal Art Projection (m-a-p) model family:
- 🔥 28/01/2025: We release YuE (乐), the most powerful open-source foundation models for music generation, specifically for transforming lyrics into full songs (lyrics2song), in the style of Suno.ai. See demos.
- 🔥 08/05/2024: We release MAP-Neo, a fully transparent large language model series for scaling-law exploration and post-training alignment, along with the training corpus Matrix.
- 🔥 11/04/2024: The MuPT paper and demo are out. HF collection.
- 🔥 08/04/2024: Chinese Tiny LLM is out. HF collection.
- 🔥 28/02/2024: We release ChatMusician's demo, code, model, data, and benchmark. 🎉
- 🔥 23/02/2024: We release OpenCodeInterpreter, which beats the GPT-4 code interpreter on HumanEval.
- 23/01/2024: We release CMMMU for better evaluation of Chinese LMMs.
- 13/01/2024: We release a series of Music Pretrained Transformer (MuPT) checkpoints, with sizes up to 1.3B parameters and a context length of 8192. Our models are LLaMA2-based and pre-trained on the world's largest symbolic music dataset (10B tokens in ABC notation). We currently support the Megatron-LM format and will release Hugging Face checkpoints soon.
- 02/06/2023: We officially release the MERT pre-print paper and training code.
- 17/03/2023: We release two advanced music understanding models, MERT-v1-95M and MERT-v1-330M, trained with a new paradigm and dataset. They outperform the previous models and generalize better to more tasks.
- 14/03/2023: We retrain the MERT-v0 model on an open-source-only music dataset: MERT-v0-public.
- 29/12/2022: We release MERT-v0, a music understanding model trained with the MLM paradigm, which performs better on downstream tasks.
- 29/10/2022: We release music2vec, a pre-trained MIR model trained with the BYOL paradigm.
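The MERT checkpoints in the log above are published on the Hugging Face Hub. A minimal loading sketch, assuming the `transformers` library and the `trust_remote_code` convention from the public MERT model cards (the imports are deferred inside the function so nothing downloads until it is called):

```python
def load_mert(repo_id: str = "m-a-p/MERT-v1-95M"):
    """Return (model, feature_extractor) for a MERT music-understanding checkpoint."""
    # Deferred import: the heavy download happens only when this is called.
    from transformers import AutoModel, Wav2Vec2FeatureExtractor

    # MERT ships custom modeling code on the Hub, hence trust_remote_code=True.
    model = AutoModel.from_pretrained(repo_id, trust_remote_code=True)
    processor = Wav2Vec2FeatureExtractor.from_pretrained(
        repo_id, trust_remote_code=True
    )
    return model, processor

# model, processor = load_mert()  # fetches the ~95M-parameter weights
```

The same call with `repo_id="m-a-p/MERT-v1-330M"` should load the larger variant.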
 
models (201)

- m-a-p/TreePO-Qwen2.5-7B (Text Generation) • 8B • 83 downloads • 2 likes
- m-a-p/transformer_340M_baseline • 0.3B • 29 downloads
- m-a-p/transformer_1.3B_baseline • 1B • 34 downloads
- m-a-p/TreePO-Qwen2.5-7B_Naive2Low_Scheduler • 8B • 66 downloads
- m-a-p/TreePO-Qwen2.5-7B_Low_Prob_Encourage • 8B • 68 downloads
- m-a-p/TreePO-Qwen2.5-7B_GRPO-TreePO-Sampling • 8B • 71 downloads
- m-a-p/TreePO-Qwen2.5-7B_fixed-div • 8B • 68 downloads
- m-a-p/CriticLeanGPT-Qwen2.5-32B-RL • 33B • 68 downloads
- m-a-p/CriticLeanGPT-Qwen2.5-14B-RL • 15B • 64 downloads • 1 like
- m-a-p/CriticLeanGPT-Qwen2.5-7B-RL • 15B • 61 downloads • 1 like
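Any of the text-generation checkpoints listed above can be pulled from the Hub with the `transformers` library. A minimal sketch, using one repo id from the list; the `device_map="auto"` placement is an illustrative assumption, not the authors' recommended setup:

```python
def load_causal_lm(repo_id: str = "m-a-p/TreePO-Qwen2.5-7B"):
    """Return (tokenizer, model) for a causal-LM checkpoint on the Hub."""
    # Deferred import: nothing is downloaded until the function is called.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")
    return tokenizer, model

# tokenizer, model = load_causal_lm()  # ~7B parameters; needs a suitable GPU
```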
datasets (65)

- m-a-p/CodeCriticBench (Preview) • 70 downloads • 3 likes
- m-a-p/OO1-Chat-747K • 3 downloads
- m-a-p/PIN-200M (Preview) • 47.3k downloads • 19 likes
- m-a-p/COIG-Writer (Preview) • 106 downloads • 21 likes
- m-a-p/Writing-Preference-Bench (Preview) • 134 downloads • 3 likes
- m-a-p/TreePO_data (Viewer) • 3.12k rows • 265 downloads
- m-a-p/Inverse_IFEval (Viewer) • 1.01k rows • 255 downloads • 21 likes
- m-a-p/PIN-14M (Viewer) • 68.1k rows • 10.4k downloads • 32 likes
- m-a-p/MTT • 861 downloads
- m-a-p/DeepWriting-20K (Viewer) • 35.8k rows • 349 downloads • 26 likes
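The datasets above can be inspected without a full download by streaming them with the `datasets` library. A minimal sketch, using one repo id from the list; the `train` split name is an assumption, since schemas and split names vary per dataset:

```python
def peek(repo_id: str = "m-a-p/Inverse_IFEval", n: int = 3):
    """Yield the first n records of a Hub dataset via streaming."""
    # Deferred import: nothing is fetched until the generator is consumed.
    from datasets import load_dataset

    ds = load_dataset(repo_id, split="train", streaming=True)
    for i, row in enumerate(ds):
        if i >= n:
            break
        yield row

# for row in peek():
#     print(row)
```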