# Meta-Agent Project
A flexible, modular AI agent platform that supports multiple LLM providers, including local deployment via Ollama.
## Overview

Meta-Agent is a framework for creating, managing, and interacting with AI agents. It provides a consistent API regardless of which AI provider powers your agents, allowing you to:
- Create customized AI agents with specific personalities, knowledge, and capabilities
- Choose from multiple AI providers (Ollama, OpenAI, etc.)
- Manage users and their associated agents
- Save and reuse agent templates
- Interact with agents via text, with optional voice processing
## Multi-Provider Architecture
Meta-Agent is designed with a provider-agnostic architecture that supports multiple AI backends:
Core Components:

- Abstract AI Service Interface: Defines a consistent API for all AI providers
- Provider Implementations:
  - Ollama Service: Integration with locally hosted open-source models
  - OpenAI Service: Integration with OpenAI's commercial API
  - [Future Providers]: Easily extend with new providers (e.g., Gemini, Claude)
- AI Service Factory: Creates the appropriate provider implementation based on configuration
Benefits:
- Provider Flexibility: Switch between providers without changing your application code
- Cost Control: Use free or local models by default, with commercial APIs as an option
- Future-Proofing: Easily add support for new providers as they emerge
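To make the pattern concrete, here is a minimal sketch of how the abstract interface and factory could fit together. The names used here (`AIService`, `generate_response`, `AIServiceFactory.create`) are illustrative, not necessarily the project's actual API:

```python
# Illustrative sketch of the provider-agnostic pattern; names are assumptions.
from abc import ABC, abstractmethod


class AIService(ABC):
    """The consistent interface every provider implementation must satisfy."""

    @abstractmethod
    def generate_response(self, prompt: str) -> str:
        ...


class OllamaService(AIService):
    def generate_response(self, prompt: str) -> str:
        raise NotImplementedError  # Would call the local Ollama server


class OpenAIService(AIService):
    def generate_response(self, prompt: str) -> str:
        raise NotImplementedError  # Would call the OpenAI API


class AIServiceFactory:
    """Picks a provider implementation based on configuration."""

    _providers = {"ollama": OllamaService, "openai": OpenAIService}

    @classmethod
    def create(cls, provider: str = "ollama") -> AIService:
        try:
            return cls._providers[provider]()
        except KeyError:
            raise ValueError(f"Unknown AI provider: {provider}")
```

Because application code depends only on `AIService`, switching providers becomes a configuration change rather than a code change.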
## Setup and Installation

### Prerequisites
- Python 3.9+
- For local models: Ollama installed
### Installation

Clone the repository:

```bash
git clone <repository-url>
cd meta-agent-project
```
Create and activate a virtual environment:

```bash
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
```
Install dependencies:

```bash
pip install -r requirements.txt
```
Set up environment variables (or use defaults in config.py):

```bash
# Create a .env file with your configuration
touch .env
```
## Configuration

The application uses a hierarchical configuration system with these options (in order of precedence):

1. Environment variables
2. .env file
3. Default values in config.py
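As an illustration of that precedence, `config.py` could be built on python-dotenv: `load_dotenv()` does not override variables already set in the environment, so environment variables win over `.env` values, and hard-coded defaults apply last. This is only a sketch of the idea, not the project's actual `config.py`:

```python
# Sketch of hierarchical configuration; assumes the python-dotenv package.
import os

from dotenv import load_dotenv

# Loads values from .env but never overrides variables already set in the
# environment, giving the precedence: environment > .env file > defaults.
load_dotenv()

AI_PROVIDER = os.getenv("AI_PROVIDER", "ollama")
OLLAMA_BASE_URL = os.getenv("OLLAMA_BASE_URL", "http://localhost:11434")
OLLAMA_MODEL = os.getenv("OLLAMA_MODEL", "llama2")
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")  # No default: OpenAI is optional
```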
## Using Ollama (Default Provider)

Ollama is the default provider when no OpenAI API key is provided, offering free, locally hosted models.
### Setting Up Ollama

1. Install Ollama from [ollama.ai](https://ollama.ai)
2. Pull a model (example):

   ```bash
   ollama pull llama2
   ```

3. Start the Ollama server:

   ```bash
   ollama serve
   ```
### Configuration for Ollama

In your .env file or environment variables:

```bash
OLLAMA_BASE_URL=http://localhost:11434  # Default Ollama server URL
OLLAMA_MODEL=llama2                     # Default model
```
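With these settings in place, a provider implementation only needs Ollama's documented `/api/generate` endpoint. The snippet below is a hedged sketch using the `requests` package, not the project's actual `ollama_service.py`:

```python
# Minimal sketch of talking to a local Ollama server; not the project's code.
import requests

OLLAMA_BASE_URL = "http://localhost:11434"
OLLAMA_MODEL = "llama2"


def generate(prompt: str) -> str:
    # /api/generate is Ollama's documented text-generation endpoint;
    # stream=False asks for a single JSON object instead of a stream.
    response = requests.post(
        f"{OLLAMA_BASE_URL}/api/generate",
        json={"model": OLLAMA_MODEL, "prompt": prompt, "stream": False},
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["response"]


print(generate("Say hello in one sentence."))
```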
## Using OpenAI (Optional)

To use OpenAI as your provider:

1. Obtain an API key from OpenAI
2. Add it to your .env file:

   ```bash
   OPENAI_API_KEY=your_api_key_here
   AI_PROVIDER=openai  # Override the default provider
   ```
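An OpenAI-backed implementation would then read these settings and call the official client. Below is a minimal sketch using the `openai` Python package (v1+); the model choice and function name are illustrative:

```python
# Sketch of an OpenAI-backed provider; assumes the openai package (v1+).
import os

from openai import OpenAI

client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))


def generate(prompt: str) -> str:
    # Chat Completions is OpenAI's standard text-generation endpoint.
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # Illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return completion.choices[0].message.content
```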
## Project Structure

```
meta-agent-project/
├── main.py                    # Application entry point
├── README.md                  # This documentation
├── requirements.txt           # Project dependencies
├── setup.sh                   # Setup script
├── src/
│   ├── api/                   # API endpoints and routes
│   │   ├── app.py             # FastAPI application
│   │   ├── auth/              # Authentication modules
│   │   └── routes/            # API routes (users, agents, templates)
│   ├── config.py              # Configuration management
│   ├── models/                # Data models and schemas
│   └── services/              # Service modules
│       ├── agent_customization/
│       ├── emotion_recognition/
│       ├── integration/       # External service integrations
│       │   ├── ai_service.py          # Abstract base class
│       │   ├── ai_service_factory.py  # Provider factory
│       │   ├── ollama_service.py      # Ollama implementation
│       │   └── openai_service.py      # OpenAI implementation
│       ├── storage/           # Database and persistence
│       │   └── database/
│       ├── template_management/
│       ├── user_management/
│       └── voice_processing/
└── tests/                     # Test modules
```
## Basic Usage

### Running the Application

```bash
python main.py
```
The API will be available at http://localhost:8000 by default.
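Since `src/api/app.py` holds the FastAPI application, `main.py` presumably just launches it with an ASGI server. A hypothetical sketch (the module path, host, and port are assumptions):

```python
# main.py -- hypothetical entry point; module path and defaults are assumptions.
import uvicorn

if __name__ == "__main__":
    # Serve the FastAPI app defined in src/api/app.py on port 8000.
    uvicorn.run("src.api.app:app", host="0.0.0.0", port=8000, reload=True)
```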
### API Endpoints
- Users: `/api/users/` - User management
- Agents: `/api/agents/` - Agent creation and management
- Templates: `/api/templates/` - Template management
- Conversations: `/api/conversations/` - Interact with agents
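As an example of exercising these endpoints, the snippet below creates an agent and sends it a message using the `requests` package. The payload fields are hypothetical; check the route definitions under `src/api/routes/` for the actual schemas:

```python
# Hypothetical client session; payload fields are illustrative, not the real schema.
import requests

BASE_URL = "http://localhost:8000"

# Create an agent (field names are assumptions; see src/api/routes/)
agent = requests.post(
    f"{BASE_URL}/api/agents/",
    json={"name": "helper", "provider": "ollama", "personality": "friendly"},
).json()

# Talk to the agent through the conversations endpoint
reply = requests.post(
    f"{BASE_URL}/api/conversations/",
    json={"agent_id": agent.get("id"), "message": "Hello!"},
).json()
print(reply)
```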
## License
[License Information]
## Contributing
[Contribution Guidelines]