# 🚀 Deploy LinkScout Backend on Hugging Face Spaces (FREE)

## Why Hugging Face Spaces?

- ✅ **16GB RAM FREE** (vs Render's 512MB)
- ✅ **Perfect for ML models** (built for this)
- ✅ **Free GPU option** available
- ✅ **Persistent storage** for models
- ✅ **No credit card** required
- ✅ **Always on** (no sleep like Render)

---

## 📦 Step-by-Step Deployment
### Step 1: Create a Hugging Face Account

1. Go to https://huggingface.co/join
2. Sign up (free, no credit card)
3. Verify your email
### Step 2: Create a New Space

1. Go to https://huggingface.co/new-space
2. Fill in the details:
   - **Owner**: Your username
   - **Space name**: `linkscout-backend`
   - **License**: MIT
   - **Select SDK**: **Docker** (we supply our own Dockerfile)
   - **Space hardware**: **CPU basic (free)** - 16GB RAM!
   - **Visibility**: Public
3. Click **"Create Space"**
### Step 3: Prepare Files for Hugging Face

We need to create a few Hugging Face-specific files.

#### 3.1 Create `app.py` (entry point for Hugging Face)

Create `D:\LinkScout\app.py`:
```python
# Wrapper entry point for Hugging Face Spaces.
# The Dockerfile below runs combined_server.py directly; this wrapper simply
# exposes the same Flask app so `python app.py` also works.
import os

from combined_server import app  # Flask app defined in combined_server.py

if __name__ == '__main__':
    port = int(os.environ.get('PORT', 7860))
    app.run(host='0.0.0.0', port=port, debug=False, threaded=True, use_reloader=False)
```
#### 3.2 Create `Dockerfile` (for Hugging Face Spaces)

Create `D:\LinkScout\Dockerfile`:
```dockerfile
FROM python:3.11-slim

# Set working directory
WORKDIR /app

# Install system dependencies
RUN apt-get update && apt-get install -y \
    build-essential \
    curl \
    git \
    && rm -rf /var/lib/apt/lists/*

# Copy requirements
COPY requirements.txt .

# Install Python dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code
COPY . .

# Create cache directory for models
RUN mkdir -p ./models_cache

# Point the Hugging Face cache at that directory
# (assumes the models are downloaded via transformers / huggingface_hub)
ENV HF_HOME=/app/models_cache

# Expose port (Hugging Face Spaces uses port 7860 by default)
EXPOSE 7860

# Set environment variables
ENV PORT=7860
ENV PYTHONUNBUFFERED=1

# Run the application
CMD ["python", "combined_server.py"]
```
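Before pushing, you can optionally build and run the image locally to confirm that it starts. A minimal sketch, assuming Docker Desktop is installed (the image tag and the key value are placeholders):

```powershell
# Build the image locally (optional sanity check)
cd D:\LinkScout
docker build -t linkscout-backend .

# Run it on the same port the Space will use
docker run --rm -p 7860:7860 -e GROQ_API_KEY=your_groq_key linkscout-backend

# In another terminal, hit the health endpoint
curl http://localhost:7860/health
```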
#### 3.3 Create the `README.md` for the Hugging Face Space

Spaces reads its configuration from the YAML front matter of the repo's `README.md`, so create `D:\LinkScout\README_SPACE.md` now and rename it to `README.md` when you add it to the Space:
```markdown
---
title: LinkScout Backend
emoji: 🔍
colorFrom: orange
colorTo: yellow
sdk: docker
pinned: false
---

# LinkScout AI-Powered Misinformation Detection Backend

This is the backend API for LinkScout, featuring:

- 🤖 8 Pre-trained ML Models
- 🔬 8-Phase Revolutionary Detection
- 🧠 Groq AI Integration
- 🔍 Real-time Fact Checking

## API Endpoints

- `POST /analyze` - Analyze text for misinformation
- `GET /health` - Health check
- `POST /feedback` - Submit RL feedback

## Environment Variables Required

Set these in Space Settings → Repository secrets:

- `GROQ_API_KEY` - Your Groq API key
- `GOOGLE_API_KEY` - (Optional) Google Search API key
- `GOOGLE_CSE_ID` - (Optional) Google Custom Search Engine ID
```
### Step 4: Update the Port for Hugging Face

Hugging Face Spaces serves apps on port **7860** by default. Find the port configuration section in `combined_server.py` and update it:
```python
if __name__ == '__main__':
    import os

    # Hugging Face Spaces uses port 7860; Render sets the PORT env variable
    port = int(os.environ.get('PORT', 7860))
    print(f" 🚀 Port: {port}")
    app.run(host='0.0.0.0', port=port, debug=False, threaded=True, use_reloader=False)
```
### Step 5: Push to Hugging Face

#### Option A: Using Git (Recommended)

```powershell
# Add Hugging Face as a remote
cd D:\LinkScout
git remote add hf https://huggingface.co/spaces/YOUR_USERNAME/linkscout-backend

# Push to Hugging Face
git push hf main
```

**Replace `YOUR_USERNAME`** with your Hugging Face username!
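When git prompts for a password, use a Hugging Face access token with write permission (created under Settings → Access Tokens) instead of your account password. One option is to embed it in the remote URL; a minimal sketch with placeholder values (keep the real token out of anything you commit or share):

```powershell
# Placeholder username and token; the token needs "write" scope
git remote set-url hf https://YOUR_USERNAME:hf_YOUR_TOKEN@huggingface.co/spaces/YOUR_USERNAME/linkscout-backend
git push hf main
```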
#### Option B: Upload Files Manually

1. Go to your Space: `https://huggingface.co/spaces/YOUR_USERNAME/linkscout-backend`
2. Click the **"Files"** tab
3. Click **"Add file"** → **"Upload files"**
4. Upload all your files (drag & drop the entire `LinkScout` folder)
### Step 6: Set Environment Variables

1. Go to your Space
2. Click the **"Settings"** tab
3. Scroll to **"Repository secrets"**
4. Add the secrets (the backend reads them as environment variables, as sketched below):
   - Name: `GROQ_API_KEY`, Value: your Groq API key
   - Name: `GOOGLE_API_KEY`, Value: your Google Search API key (optional)
   - Name: `GOOGLE_CSE_ID`, Value: your Google Custom Search Engine ID (optional)
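Spaces injects repository secrets into the container as environment variables, so the backend reads them with `os.environ`. A minimal sketch of that pattern (the fallback and error message are illustrative, not necessarily how `combined_server.py` is written):

```python
import os

# Repository secrets appear as environment variables at runtime
GROQ_API_KEY = os.environ.get("GROQ_API_KEY")        # required
GOOGLE_API_KEY = os.environ.get("GOOGLE_API_KEY")    # optional
GOOGLE_CSE_ID = os.environ.get("GOOGLE_CSE_ID")      # optional

if not GROQ_API_KEY:
    raise RuntimeError("GROQ_API_KEY is missing; add it under Settings -> Repository secrets")
```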
### Step 7: Wait for the Build

The Space builds automatically when you push files:

1. Go to the **"Logs"** tab to watch the build progress
2. The first build takes **10-15 minutes** (it downloads the models)
3. Look for:

```
✅ RoBERTa loaded
✅ Emotion model loaded
...
🚀 Starting LinkScout server on port 7860...
Running on http://0.0.0.0:7860
```
### Step 8: Get Your API URL

Once deployed, your backend will be available at:

```
https://YOUR_USERNAME-linkscout-backend.hf.space
```

**Example endpoints** (a sample request follows this list):

- Health check: `https://YOUR_USERNAME-linkscout-backend.hf.space/health`
- Analyze: `https://YOUR_USERNAME-linkscout-backend.hf.space/analyze`
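Once the Space shows "Running", you can exercise the analysis endpoint directly. A minimal sketch (the `text` field name in the JSON body is an assumption about the /analyze payload):

```bash
curl -X POST https://YOUR_USERNAME-linkscout-backend.hf.space/analyze \
  -H "Content-Type: application/json" \
  -d '{"text": "Example claim to fact-check"}'
```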
---

## 🎯 Advantages of Hugging Face Spaces

| Feature | Render Free | Hugging Face Free |
|---------|-------------|-------------------|
| **RAM** | 512MB ❌ | 16GB ✅ |
| **ML Models** | Can't load ❌ | Perfect ✅ |
| **Sleep** | After 15 min ⚠️ | Always on ✅ |
| **Build Time** | Fast | Fast |
| **Custom Domain** | Yes ✅ | Yes ✅ |
| **For Your App** | Won't work ❌ | Perfect! ✅ |
---

## 🔧 Troubleshooting

### Build Fails

- Check the logs for errors
- Verify that all files uploaded
- Check the Dockerfile syntax

### Models Won't Load

- Check that 16GB RAM is selected (CPU basic hardware)
- Verify that requirements.txt lists all dependencies
- Check the logs for download errors

### API Not Responding

- Check that the Space is "Running" (not building)
- Verify that port 7860 is used
- Test the health endpoint first

### Environment Variables Not Working

- Make sure they're set as "Repository secrets"
- Restart the Space after adding secrets (the quick check below helps confirm they arrive)
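To confirm that the secrets actually reach the container, a temporary diagnostic at startup can help. A small sketch to drop near the top of `combined_server.py` and remove once verified:

```python
import os

# Temporary diagnostic: report which expected secrets are visible to the process
for name in ("GROQ_API_KEY", "GOOGLE_API_KEY", "GOOGLE_CSE_ID"):
    print(f"{name} set: {bool(os.environ.get(name))}")
```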
---

## 🚀 Next Steps After Deployment

1. **Test your backend:**
   ```bash
   curl https://YOUR_USERNAME-linkscout-backend.hf.space/health
   ```
2. **Update your frontend** to use the new URL (see the sketch after this list):
   - Replace `http://localhost:5000` with your HF Space URL
   - Update it in `app/search/page.tsx`
3. **Update the extension** to use the new URL:
   - Update `API_URL` in `extension/popup.js`
4. **Deploy the frontend on Vercel** (separate from the backend)
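A minimal sketch of the URL change, assuming the frontend and the extension each keep the backend base URL in a single constant (`API_URL` comes from step 3 above; how `app/search/page.tsx` stores it is an assumption):

```javascript
// extension/popup.js (and the equivalent constant in app/search/page.tsx)
// Before:
// const API_URL = "http://localhost:5000";

// After: point at the deployed Space (replace YOUR_USERNAME)
const API_URL = "https://YOUR_USERNAME-linkscout-backend.hf.space";

// Endpoint paths stay the same, e.g.
// fetch(`${API_URL}/analyze`, { method: "POST", ... })
```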
---

## 💡 Cost Comparison

- **Render Free**: Can't run your app (512MB limit)
- **Railway Free**: $5/month credit, which will run out
- **Hugging Face Free**: No time limit, perfect for ML
- **Paid options**: $7-20/month for more RAM

**Hugging Face Spaces is the ONLY truly free option that will work for your ML-heavy backend!**
---

## 📚 Resources

- Hugging Face Spaces docs: https://huggingface.co/docs/hub/spaces
- Docker SDK guide: https://huggingface.co/docs/hub/spaces-sdks-docker
- Hugging Face Git: https://huggingface.co/docs/hub/repositories-getting-started

Good luck! 🚀