
The Lab
AI & ML interests
None defined yet.
Recent Activity
The Lab
Welcome to The Lab โ Kroonen AI's research hub for advancing safe and accessible AI. We develop AI systems that are powerful, aligned with human values, and rigorously safety-tested through transparent research and open-source collaboration.
๐ฌ About The Lab
The Lab is the research division of Kroonen AI, Inc., where we conduct specialized research at the intersection of AI capability and safety. Our mission is to ensure that as AI systems become more powerful, they remain aligned with human values and operate within appropriate ethical boundaries.
๐ฏ What We Do
- ๐ก๏ธ Safety Research: Developing comprehensive evaluation methodologies for language models, including ASL-3 style testing frameworks
- โก Fine-Tuning Innovation: Creating approaches that enhance capabilities while maintaining robust safety guardrails
- ๐ค Open Collaboration: Partnering with researchers and organizations committed to responsible AI development
- ๐ผ Professional Consulting: Expert guidance on model safety, deployment strategies, and ethical AI implementation
๐ง Our Specialties
Safety Evaluation Frameworks
Comprehensive methodologies for testing model responses across potentially problematic domains, with focus on maintaining safety despite various persuasion techniques.
Custom Fine-Tuning with Safety Guardrails
Utilizing Supervised Fine-Tuning (SFT), Direct Preference Optimization (DPO), and Low-Rank Adaptation (LoRA) while ensuring safety boundaries remain intact.
Persona & Behavioral Alignment
Researching how emotional fine-tuning affects safety boundaries and creating balanced approaches to persona development.
Advanced Reasoning with Ethical Constraints
Implementing Chain-of-Thought (CoT) methodologies that improve reasoning capabilities while maintaining ethical boundaries.
๐ Featured Projects
Ophelia.chat
An innovative, safety-focused conversational assistant with privacy-first design.
- Features: Cloud and local inference, built-in safety measures, privacy-preserving design
- Status: Currently in beta on TestFlight
- License: MIT
- Repository: GitHub
SafetyBench
Comprehensive benchmark for evaluating model safety across various scenarios and persuasion techniques.
ASL-3 Evaluation Framework
Sophisticated testing system for language model safety inspired by industry best practices.
Persona-Safe Models
Fine-tuned models that maintain emotional resonance and distinct personalities while preserving strong safety boundaries.
๐ก๏ธ AI Safety Framework
Our comprehensive approach includes:
- ASL-3 Style Evaluations: Testing across CBRNE domains to ensure models resist providing harmful information
- Multiple Persuasion Techniques: Evaluating responses to direct requests, emotional coaxing, fictional scenarios, and thought experiments
- Tone & Persona Analysis: Measuring how emotional fine-tuning affects safety boundaries
- Risk Vector Detection: Systems trained to identify subtle patterns indicating safety vulnerabilities
All evaluations happen in isolated, offline environments with strict controls.
Learn more: kroonen.ai/safety
๐ Open Source Licensing
We believe in open innovation with responsibility:
- The Lab Models: Apache License 2.0
- Ophelia.chat: MIT License
- Safety Evaluation Tools: Open licenses with usage guidelines
๐ค Collaboration & Services
Looking for specialized safety evaluation or AI consulting?
We offer:
- Tailored safety solutions and evaluation frameworks
- Clear, competitive pricing with transparent structures
- Rigorous confidentiality and security protocols
Contact: [email protected] or visit kroonen.ai
๐ Stay Connected
- ๐ Website: kroonen.ai
- ๐ GitHub: github.com/kroonen-ai
- ๐ค Hugging Face: huggingface.co/kroonen-ai
- ๐ผ LinkedIn: linkedin.com/company/kroonen-ai
- ๐ฆ Bluesky: kroonen.ai
- ๐ฆ X (Twitter): @kroonen_ai
- ๐ Mastodon: @kroonen_ai
๐ Kroonen AI, Inc.
2108 N ST #12364
Sacramento, CA 95816, USA
๐ (916) 999-5979
Committed to safe and accessible AI research that benefits humanity.