---
title: Duohub
emoji: ⚡
colorFrom: indigo
colorTo: purple
sdk: static
pinned: false
---
# Duohub - Ultra-fast Graph RAG for Voice AI
Duohub provides blazing-fast graph RAG services designed for voice AI and other low-latency applications, delivering context in under 50ms.
## Key Features
- **Graph & Vector RAG**: Choose between semantic similarity search with reranking or deep query resolution with graph traversals
- **Custom Ontologies**: Pre-trained ontology models for different domains, with options for custom ontologies
- **Advanced Processing**: Built-in coreference resolution, fact extraction, and entity resolution
- **Global Scale**: Data replicated across multiple regions for consistent low-latency performance
- **Simple Integration**: Start querying your knowledge base with just three lines of code
## Quick Start
```python
from duohub import Duohub

client = Duohub(api_key="your_api_key")

response = client.query(query="Your question here", memoryID="your_memory_id")
```
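A common next step is to inject the retrieved context into a voice agent's system prompt. The sketch below builds on the Quick Start call; rendering the result with `str(result)` and the prompt wording are illustrative assumptions, since the exact response shape depends on your memory configuration.

```python
from duohub import Duohub

client = Duohub(api_key="your_api_key")

def build_system_prompt(user_question: str, memory_id: str) -> str:
    # Retrieve context for the caller's question from the Duohub memory.
    result = client.query(query=user_question, memoryID=memory_id)

    # Assumption: render the result as text; adapt this to the actual
    # response shape returned for your memory.
    context = str(result)

    # Hand the retrieved context to the voice agent as grounding.
    return (
        "You are a concise voice assistant. "
        "Answer using only the context below.\n\n"
        f"Context:\n{context}"
    )

print(build_system_prompt("Your question here", "your_memory_id"))
```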
## Why Duohub? ⭐
- 🚄 **Lightning-Fast**: Delivers query responses in under 50ms, making it ideal for real-time voice AI applications
- 🎯 **High Precision**: Graph-based memory ensures accurate and contextually relevant responses
- 🔌 **Easy Integration**: Get started with just 3 lines of code - no complex setup or infrastructure needed
- 🌍 **Global Ready**: Data replicated across 3 locations by default for consistent low-latency performance
- 🎛️ **Flexible Options**: Choose between vector or graph RAG based on your needs (see the sketch after this list)
- 🛠️ **Built-in Processing**: Includes coreference resolution, fact extraction, and entity resolution out of the box
- 🏢 **Enterprise Grade**: Supports on-premise deployment, custom ontologies, and dedicated support
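Retrieval mode is chosen per query. The snippet below is a minimal sketch of that choice and assumes a hypothetical `mode` argument; only the Quick Start signature (`query` plus `memoryID`) is shown above, so check the Duohub API reference for the actual parameter name.

```python
from duohub import Duohub

client = Duohub(api_key="your_api_key")

# Vector RAG: semantic similarity search with reranking (lowest latency).
fast_answer = client.query(
    query="What plan is the caller on?",
    memoryID="your_memory_id",
    mode="vector",  # assumed parameter name, for illustration only
)

# Graph RAG: deep query resolution with graph traversals (richer context).
deep_answer = client.query(
    query="How do the caller's past tickets relate to this outage?",
    memoryID="your_memory_id",
    mode="graph",  # assumed parameter name, for illustration only
)
```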