Base models trained on 1T high-quality tokens, demonstrating strong competitiveness among existing SOTA small models (<2B).
ParScale (community)

AI & ML interests: none defined yet.
Models: 67 total (the 10 listed on this page are shown below; update dates were not captured, and a blank Likes cell means no like count was shown)

Model                                 Task             Size  Downloads  Likes
ParScale/ParScale-1.8B-P1-Inst        Text Generation  2B    22         1
ParScale/ParScale-1.8B-P2-Inst        Text Generation  2B    10
ParScale/ParScale-1.8B-P4-Inst        Text Generation  2B    8          1
ParScale/ParScale-1.8B-P8-Inst        Text Generation  2B    8          2
ParScale/ParScale-1.8B-P1             Text Generation  2B    32         1
ParScale/ParScale-1.8B-P2             Text Generation  2B    11
ParScale/ParScale-1.8B-P4             Text Generation  2B    17         1
ParScale/ParScale-Qwen-3B-P2-Python   Text Generation  3B    2
ParScale/ParScale-Qwen-3B-P4-Python   Text Generation  3B    4
ParScale/ParScale-Qwen-3B-P8-Python   Text Generation  3B    2
Datasets: 0 (none public yet)
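The checkpoints above are ordinary Hugging Face model repos, so they can in principle be pulled with the transformers Auto classes. A minimal sketch, assuming the repos work with the standard AutoModelForCausalLM API and may ship custom modeling code (hence trust_remote_code=True — an assumption, not confirmed by this page):

```python
# Sketch: loading one of the ParScale checkpoints listed above.
# Assumptions not stated on this page: the repo is compatible with the
# standard AutoModelForCausalLM / AutoTokenizer interface and may need
# trust_remote_code=True for custom architecture code.

MODEL_ID = "ParScale/ParScale-1.8B-P1-Inst"  # repo id taken from the list above

def load(model_id: str = MODEL_ID):
    # Imported inside the function so nothing is downloaded (and
    # transformers is not required) until a load is actually requested.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
    return tokenizer, model
```

Calling load() fetches roughly 2B parameters of weights on first use; after that, text can be produced with the usual tokenizer/model.generate() round trip.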