
Meta-Agent Project

A flexible, modular AI agent platform that supports multiple LLM providers, including local deployment via Ollama.

Overview

Meta-Agent is a framework for creating, managing, and interacting with AI agents. It exposes a consistent API regardless of which AI provider powers your agents, allowing you to:

  • Create customized AI agents with specific personalities, knowledge, and capabilities
  • Choose from multiple AI providers (Ollama, OpenAI, etc.)
  • Manage users and their associated agents
  • Save and reuse agent templates
  • Interact with agents via text, with optional voice processing

Multi-Provider Architecture

Meta-Agent is designed with a provider-agnostic architecture that supports multiple AI backends:

Core Components:

  1. Abstract AI Service Interface: Defines a consistent API for all AI providers
  2. Provider Implementations:
    • Ollama Service: Integration with locally hosted open-source models
    • OpenAI Service: Integration with OpenAI's commercial API
    • [Future Providers]: Easily extend with new providers (e.g., Gemini, Claude)
  3. AI Service Factory: Creates the appropriate provider implementation based on configuration
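As a rough sketch of how these pieces fit together (the actual classes live in src/services/integration/; the class and method names below are illustrative assumptions, not the project's exact API):

# Illustrative sketch only -- real interfaces live in src/services/integration/.
from abc import ABC, abstractmethod

class AIService(ABC):
    """Consistent interface that every provider implements."""

    @abstractmethod
    def generate_response(self, prompt: str) -> str:
        """Send a prompt to the backing model and return its reply."""

class AIServiceFactory:
    """Builds the provider implementation selected by configuration."""

    @staticmethod
    def create(provider: str) -> AIService:
        if provider == "ollama":
            from .ollama_service import OllamaService   # hypothetical name
            return OllamaService()
        if provider == "openai":
            from .openai_service import OpenAIService   # hypothetical name
            return OpenAIService()
        raise ValueError(f"Unknown AI provider: {provider}")

Application code asks the factory for an AIService and only ever calls the abstract interface, which is what makes provider switching transparent.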

Benefits:

  • Provider Flexibility: Switch between providers without changing your application code
  • Cost Control: Use free or local models by default, with commercial APIs as an option
  • Future-Proofing: Easily add support for new providers as they emerge

Setup and Installation

Prerequisites

  • Python 3.9+
  • For local models: Ollama installed

Installation

  1. Clone the repository:

    git clone <repository-url>
    cd meta-agent-project
    
  2. Create and activate a virtual environment:

    python -m venv .venv
    source .venv/bin/activate  # On Windows: .venv\Scripts\activate
    
  3. Install dependencies:

    pip install -r requirements.txt
    
  4. Set up environment variables (or use defaults in config.py):

    # Create a .env file with your configuration
    touch .env
    

Configuration

The application uses a hierarchical configuration system with these options (in order of precedence):

  1. Environment variables
  2. .env file
  3. Default values in config.py
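A minimal sketch of that precedence using python-dotenv (the real logic lives in src/config.py and may differ in detail):

# Sketch of the precedence rule, assuming python-dotenv; see src/config.py.
import os
from dotenv import load_dotenv

# load_dotenv() does not override variables already set in the process
# environment, so real environment variables win over values in .env.
load_dotenv()

# The getenv() default is the last resort, mirroring defaults in config.py.
OLLAMA_BASE_URL = os.getenv("OLLAMA_BASE_URL", "http://localhost:11434")
OLLAMA_MODEL = os.getenv("OLLAMA_MODEL", "llama2")
AI_PROVIDER = os.getenv("AI_PROVIDER", "ollama")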

Using Ollama (Default Provider)

Ollama is the default provider when no OpenAI API key is configured, giving you free, locally hosted models.

Setting Up Ollama

  1. Install Ollama from ollama.ai
  2. Pull a model (example):
    ollama pull llama2
    
  3. Start Ollama server:
    ollama serve
    

Configuration for Ollama

In your .env file or environment variables:

OLLAMA_BASE_URL=http://localhost:11434  # Default Ollama server URL
OLLAMA_MODEL=llama2                     # Default model
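
To sanity-check this configuration, you can call Ollama's REST API directly (a standalone sketch using requests; the project's OllamaService wrapper may be structured differently):

# Direct call to Ollama's /api/generate endpoint -- a connectivity check,
# not the project's actual service code.
import requests

OLLAMA_BASE_URL = "http://localhost:11434"

def generate(prompt: str, model: str = "llama2") -> str:
    # stream=False asks for a single JSON object instead of streamed chunks.
    resp = requests.post(
        f"{OLLAMA_BASE_URL}/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

print(generate("Say hello in one sentence."))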

Using OpenAI (Optional)

To use OpenAI as your provider:

  1. Obtain an API key from OpenAI
  2. Add to your .env file:
    OPENAI_API_KEY=your_api_key_here
    AI_PROVIDER=openai  # Override default provider
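
You can verify the key works before wiring it into the app (a minimal sketch using the official openai package's v1 client; the project's OpenAIService wrapper may differ):

# Minimal check with the official openai package (v1 client API).
import os
from openai import OpenAI

# The client also reads OPENAI_API_KEY from the environment by default.
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

completion = client.chat.completions.create(
    model="gpt-3.5-turbo",  # model choice here is an assumption
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(completion.choices[0].message.content)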
    

Project Structure

meta-agent-project/
β”œβ”€β”€ main.py                  # Application entry point
β”œβ”€β”€ README.md                # This documentation
β”œβ”€β”€ requirements.txt         # Project dependencies
β”œβ”€β”€ setup.sh                 # Setup script
β”œβ”€β”€ src/
β”‚   β”œβ”€β”€ api/                 # API endpoints and routes
β”‚   β”‚   β”œβ”€β”€ app.py           # FastAPI application
β”‚   β”‚   β”œβ”€β”€ auth/            # Authentication modules
β”‚   β”‚   └── routes/          # API routes (users, agents, templates)
β”‚   β”œβ”€β”€ config.py            # Configuration management
β”‚   β”œβ”€β”€ models/              # Data models and schemas
β”‚   └── services/            # Service modules
β”‚       β”œβ”€β”€ agent_customization/
β”‚       β”œβ”€β”€ emotion_recognition/
β”‚       β”œβ”€β”€ integration/     # External service integrations
β”‚       β”‚   β”œβ”€β”€ ai_service.py          # Abstract base class
β”‚       β”‚   β”œβ”€β”€ ai_service_factory.py  # Provider factory
β”‚       β”‚   β”œβ”€β”€ ollama_service.py      # Ollama implementation
β”‚       β”‚   └── openai_service.py      # OpenAI implementation
β”‚       β”œβ”€β”€ storage/         # Database and persistence
β”‚       β”‚   └── database/
β”‚       β”œβ”€β”€ template_management/
β”‚       β”œβ”€β”€ user_management/
β”‚       └── voice_processing/
└── tests/                   # Test modules

Basic Usage

Running the Application

python main.py

The API will be available at http://localhost:8000 by default.

API Endpoints

  • Users: /api/users/ - User management
  • Agents: /api/agents/ - Agent creation and management
  • Templates: /api/templates/ - Template management
  • Conversations: /api/conversations/ - Interact with agents
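
For example, the agents endpoint can be exercised with requests (the payload fields below are illustrative assumptions; since the app is built on FastAPI, the interactive docs at http://localhost:8000/docs show the actual schemas):

# Hypothetical client-side calls; payload fields are assumptions.
import requests

BASE = "http://localhost:8000"

# Create an agent (field names are illustrative, not the real schema).
agent = requests.post(
    f"{BASE}/api/agents/",
    json={"name": "helper", "personality": "friendly assistant"},
)
print(agent.status_code, agent.json())

# List existing agents.
print(requests.get(f"{BASE}/api/agents/").json())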

License

[License Information]

Contributing

[Contribution Guidelines]
