AI & ML interests

None defined yet.

Recent Activity

kroonenย  updated a Space 10 days ago
kroonen-ai/README
kroonenย  updated a Space about 2 months ago
kroonen-ai/README
kroonenย  published a Space 4 months ago
kroonen-ai/README
View all activity

The Lab

GitHub license MIT licensed Safety Focus Website

Welcome to The Lab โ€“ Kroonen AI's research hub for advancing safe and accessible AI. We develop AI systems that are powerful, aligned with human values, and rigorously safety-tested through transparent research and open-source collaboration.


๐Ÿ”ฌ About The Lab

The Lab is the research division of Kroonen AI, Inc., where we conduct specialized research at the intersection of AI capability and safety. Our mission is to ensure that as AI systems become more powerful, they remain aligned with human values and operate within appropriate ethical boundaries.


๐ŸŽฏ What We Do

  • ๐Ÿ›ก๏ธ Safety Research: Developing comprehensive evaluation methodologies for language models, including ASL-3 style testing frameworks
  • โšก Fine-Tuning Innovation: Creating approaches that enhance capabilities while maintaining robust safety guardrails
  • ๐Ÿค Open Collaboration: Partnering with researchers and organizations committed to responsible AI development
  • ๐Ÿ’ผ Professional Consulting: Expert guidance on model safety, deployment strategies, and ethical AI implementation

๐Ÿ”ง Our Specialties

Safety Evaluation Frameworks

Comprehensive methodologies for testing model responses across potentially problematic domains, with focus on maintaining safety despite various persuasion techniques.

Custom Fine-Tuning with Safety Guardrails

Utilizing Supervised Fine-Tuning (SFT), Direct Preference Optimization (DPO), and Low-Rank Adaptation (LoRA) while ensuring safety boundaries remain intact.

Persona & Behavioral Alignment

Researching how emotional fine-tuning affects safety boundaries and creating balanced approaches to persona development.

Advanced Reasoning with Ethical Constraints

Implementing Chain-of-Thought (CoT) methodologies that improve reasoning capabilities while maintaining ethical boundaries.


๐Ÿš€ Featured Projects

Ophelia.chat

An innovative, safety-focused conversational assistant with privacy-first design.

  • Features: Cloud and local inference, built-in safety measures, privacy-preserving design
  • Status: Currently in beta on TestFlight
  • License: MIT
  • Repository: GitHub

SafetyBench

Comprehensive benchmark for evaluating model safety across various scenarios and persuasion techniques.

ASL-3 Evaluation Framework

Sophisticated testing system for language model safety inspired by industry best practices.

Persona-Safe Models

Fine-tuned models that maintain emotional resonance and distinct personalities while preserving strong safety boundaries.


๐Ÿ›ก๏ธ AI Safety Framework

Our comprehensive approach includes:

  • ASL-3 Style Evaluations: Testing across CBRNE domains to ensure models resist providing harmful information
  • Multiple Persuasion Techniques: Evaluating responses to direct requests, emotional coaxing, fictional scenarios, and thought experiments
  • Tone & Persona Analysis: Measuring how emotional fine-tuning affects safety boundaries
  • Risk Vector Detection: Systems trained to identify subtle patterns indicating safety vulnerabilities

All evaluations happen in isolated, offline environments with strict controls.

Learn more: kroonen.ai/safety


๐Ÿ“„ Open Source Licensing

We believe in open innovation with responsibility:

  • The Lab Models: Apache License 2.0
  • Ophelia.chat: MIT License
  • Safety Evaluation Tools: Open licenses with usage guidelines

๐Ÿค Collaboration & Services

Looking for specialized safety evaluation or AI consulting?

We offer:

  • Tailored safety solutions and evaluation frameworks
  • Clear, competitive pricing with transparent structures
  • Rigorous confidentiality and security protocols

Contact: [email protected] or visit kroonen.ai


๐ŸŒ Stay Connected


๐Ÿ“ Kroonen AI, Inc.
2108 N ST #12364
Sacramento, CA 95816, USA
๐Ÿ“ž (916) 999-5979


Committed to safe and accessible AI research that benefits humanity.

models 0

None public yet

datasets 0

None public yet