---
title: AI Research Hub - Complete HuggingFace Demo
emoji: 🚀
colorFrom: blue
colorTo: purple
sdk: gradio
sdk_version: 5.32.0
app_file: app.py
pinned: false
license: mit
short_description: Complete HuggingFace Inference API demo platform
tags:
  - nlp
  - computer-vision
  - text-generation
  - question-answering
  - text-classification
  - zero-shot-classification
  - named-entity-recognition
  - text-to-image
  - sentiment-analysis
  - translation
  - summarization
  - feature-extraction
  - huggingface
  - inference-api
  - gradio
  - ai-demo
---
# 🚀 AI Research Hub - Complete HuggingFace Demo
A comprehensive demonstration of HuggingFace's Inference API capabilities, optimized for Hugging Face Spaces.
## ✨ Features
### 📝 Text Processing
- 💬 Chat - Conversational AI with advanced language models
- 🎭 Fill Mask - Text completion and prediction
- ❓ Question Answering - Extract answers from context
- 📄 Summarization - Automatic text summarization
### 🏷️ Classification & Analysis
- 🏷️ Sentiment Analysis - Emotion and sentiment detection
- 🎯 Zero-Shot Classification - Classify with custom labels
- 🏷️ Named Entity Recognition - Extract people, places, organizations
- 🧮 Text Similarity - Compare semantic similarity between texts
### 🎨 Multimodal Capabilities
- 🌍 Translation - English to French translation
- 🖼️ Image Classification - Identify objects and scenes in images
- 🎨 Text-to-Image - Generate images from text descriptions
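
All of the features above are served by Hugging Face's hosted inference endpoints. As a rough illustration (not the exact code in app.py), two of them could be called through huggingface_hub's `InferenceClient` like this:

```python
# Illustrative sketch only; app.py may structure these calls differently.
import os
from huggingface_hub import InferenceClient

client = InferenceClient(token=os.environ.get("HF_TOKEN"))

# Translation (English -> French)
result = client.translation(
    "The weather is beautiful today.",
    model="Helsinki-NLP/opus-mt-en-fr",
)
print(result.translation_text)

# Text-to-Image: returns a PIL.Image
image = client.text_to_image(
    "a watercolor painting of a lighthouse at sunset",
    model="runwayml/stable-diffusion-v1-5",
)
image.save("generated.png")
```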
## 🔧 Setup Instructions
### For Hugging Face Spaces:
1. Fork/Duplicate this Space
2. Set up your HuggingFace token:
   - Go to Settings → Repository secrets
   - Add a new secret with the name `HF_TOKEN`
   - Value: your HuggingFace token from https://huggingface.co/settings/tokens
3. That's it! The Space will automatically restart and all features will be available.
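
Inside a Space, repository secrets are exposed to the app as environment variables. A minimal sketch of how an app might pick up the token (the actual app.py may check more sources or handle the missing-token case differently):

```python
import os
from huggingface_hub import InferenceClient

def get_hf_token():
    # Spaces injects repository secrets as environment variables, so checking
    # a couple of common names covers both local and Spaces deployments.
    for name in ("HF_TOKEN", "HUGGING_FACE_HUB_TOKEN"):
        token = os.environ.get(name)
        if token:
            return token
    return None

token = get_hf_token()
client = InferenceClient(token=token) if token else None  # features disabled without a token
```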
### For Local Development:

```bash
# Clone the repository
git clone <your-repo-url>
cd <repo-name>

# Install dependencies
pip install -r requirements.txt

# Set environment variable
export HF_TOKEN="your_token_here"

# Run the application
python app.py
```
## 🎯 Key Improvements

### ✅ Fixed Issues
- Question Answering: Proper input format for API calls (see the sketch after this list)
- Text Classification: Working models with proper error handling
- Zero-Shot Classification: Correct API method usage
- Named Entity Recognition: Fixed entity extraction and labeling
- Image Classification: Better error handling for uploads
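
For the question-answering and zero-shot fixes above, the key point is passing inputs in the shape the API expects. A hedged sketch using huggingface_hub's `InferenceClient` (the example strings are made up):

```python
import os
from huggingface_hub import InferenceClient

client = InferenceClient(token=os.environ.get("HF_TOKEN"))

# Question Answering takes separate question/context fields,
# not a single concatenated string.
qa = client.question_answering(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron tower in Paris, France.",
    model="distilbert-base-cased-distilled-squad",
)
print(qa.answer, qa.score)

# Zero-Shot Classification takes the text plus the candidate labels.
labels = client.zero_shot_classification(
    "I loved this movie, the acting was fantastic.",
    ["positive", "negative", "neutral"],
    model="facebook/bart-large-mnli",
)
print(labels)
```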
### 🚀 Enhanced Features
- Spaces Optimization: Memory and performance optimized for HF Spaces
- Robust Error Handling: Clear error messages and fallback strategies
- Modern UI: Clean, responsive interface with organized tabs
- Token Management: Multiple token source detection for Spaces
- Text Similarity: Semantic comparison with cosine similarity scores (see the sketch after this list)
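
A minimal sketch of the cosine-similarity comparison mentioned above, built on the feature-extraction endpoint (the helper name and pooling step are illustrative, not necessarily what app.py does):

```python
import os
import numpy as np
from huggingface_hub import InferenceClient

client = InferenceClient(token=os.environ.get("HF_TOKEN"))
EMBED_MODEL = "sentence-transformers/all-MiniLM-L6-v2"

def cosine_similarity(text_a: str, text_b: str) -> float:
    # feature_extraction returns an embedding (numpy array) for each text.
    emb_a = np.asarray(client.feature_extraction(text_a, model=EMBED_MODEL))
    emb_b = np.asarray(client.feature_extraction(text_b, model=EMBED_MODEL))
    # If token-level embeddings come back, mean-pool them into one vector.
    if emb_a.ndim > 1:
        emb_a = emb_a.mean(axis=0)
    if emb_b.ndim > 1:
        emb_b = emb_b.mean(axis=0)
    # Cosine similarity: dot product of the two unit-normalized vectors.
    return float(np.dot(emb_a, emb_b) / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b)))

print(cosine_similarity("A cat sits on the mat.", "A kitten is lying on the rug."))
```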
## 📊 Models Used
| Task | Model | Description |
|---|---|---|
| Chat | microsoft/DialoGPT-medium | Conversational AI |
| Fill Mask | distilbert-base-uncased | Lightweight BERT model |
| Q&A | distilbert-base-cased-distilled-squad | SQuAD-trained model |
| Summarization | facebook/bart-large-cnn | CNN-trained BART |
| Sentiment | cardiffnlp/twitter-roberta-base-sentiment-latest | Twitter sentiment |
| Zero-Shot | facebook/bart-large-mnli | MNLI-trained BART |
| NER | dslim/bert-base-NER | CoNLL-trained BERT |
| Translation | Helsinki-NLP/opus-mt-en-fr | English-French translator |
| Embeddings | sentence-transformers/all-MiniLM-L6-v2 | Sentence embeddings |
| Image Classification | google/vit-base-patch16-224 | Vision Transformer |
| Text-to-Image | runwayml/stable-diffusion-v1-5 | Stable Diffusion |
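
One straightforward way to keep these defaults in a single place is a task-to-model mapping; the dict below is a hypothetical example built from the table, not necessarily how app.py stores them:

```python
# Hypothetical mapping of tasks to the default model IDs listed above.
MODELS = {
    "chat": "microsoft/DialoGPT-medium",
    "fill_mask": "distilbert-base-uncased",
    "question_answering": "distilbert-base-cased-distilled-squad",
    "summarization": "facebook/bart-large-cnn",
    "sentiment": "cardiffnlp/twitter-roberta-base-sentiment-latest",
    "zero_shot": "facebook/bart-large-mnli",
    "ner": "dslim/bert-base-NER",
    "translation": "Helsinki-NLP/opus-mt-en-fr",
    "embeddings": "sentence-transformers/all-MiniLM-L6-v2",
    "image_classification": "google/vit-base-patch16-224",
    "text_to_image": "runwayml/stable-diffusion-v1-5",
}
```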
## 🔒 Privacy & Security
- No Data Storage: All processing happens in real-time; no data is stored
- Secure Token Handling: Tokens are handled securely through Spaces secrets
- API Rate Limiting: Built-in handling for API rate limits and quotas
## 🚨 Troubleshooting

### Common Issues:

"API client not available"
- Solution: Set `HF_TOKEN` in Spaces settings

"Rate limit reached"
- Solution: Wait a moment and try again

"Model loading"
- Solution: Some models need time to load; retry after a few seconds

"Service unavailable"
- Solution: Temporary HuggingFace service issue; try again later
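
For the rate-limit and model-loading cases, a simple retry-with-delay wrapper is often enough. This is a sketch of the general pattern, not the app's actual error handling:

```python
import time
from huggingface_hub.utils import HfHubHTTPError

def call_with_retry(fn, *args, retries=3, delay=5.0, **kwargs):
    # Retry transient failures (rate limits, models still loading, 5xx errors)
    # a few times with a fixed pause; re-raise on the final attempt.
    for attempt in range(retries):
        try:
            return fn(*args, **kwargs)
        except HfHubHTTPError:
            if attempt == retries - 1:
                raise
            time.sleep(delay)

# Usage, assuming `client` is an InferenceClient created elsewhere:
# summary = call_with_retry(client.summarization, long_text, model="facebook/bart-large-cnn")
```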
## 🛠️ Technical Details

### Architecture
- Frontend: Gradio with custom CSS
- Backend: HuggingFace Inference API
- Deployment: Optimized for HF Spaces environment
- Error Handling: Comprehensive error catching and user feedback
### Performance Optimizations
- Memory Efficient: Minimal memory footprint for Spaces
- Fast Models: Selected for quick response times
- Graceful Degradation: Features degrade gracefully if dependencies are missing
- Connection Pooling: Efficient API client management (see the sketch after this list)
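
As an illustration of the last point, creating one client and reusing it for every request is a simple way to keep API client management efficient; the `lru_cache` approach below is a sketch, and the real app may manage its client differently:

```python
import os
from functools import lru_cache
from huggingface_hub import InferenceClient

@lru_cache(maxsize=1)
def get_client() -> InferenceClient:
    # Constructed once and shared by every request handler instead of
    # building a new client per call.
    return InferenceClient(token=os.environ.get("HF_TOKEN"))
```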
## 🚀 Future Enhancements
- Advanced Visualization: Interactive embedding plots
- Batch Processing: Multiple text processing
- Custom Model Support: User-specified models
- Audio Processing: Speech-to-text capabilities
- Real-time Streaming: Live text generation
## 🤝 Contributing
- Fork the repository
- Create a feature branch
- Make your changes
- Test thoroughly
- Submit a pull request
## 📄 License
This project is open source and available under the MIT License.
## 🙏 Acknowledgments
- HuggingFace for the amazing Inference API and Spaces platform
- Gradio for the intuitive interface framework
- Open Source Community for the incredible model ecosystem
Built with ❤️ for the AI research community