from fastapi import FastAPI, HTTPException
from fastapi.middleware.cors import CORSMiddleware
from pydantic import BaseModel
import gradio as gr
import uvicorn
import logging
import os
import requests
import socket

# Set up logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

app = FastAPI(title="PowerThought API", description="Strategic wisdom with integrity")

app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

# PowerThought System Prompt
SYSTEM_PROMPT = """You are PowerThought, a strategic advisor who transforms the 48 Laws of Power into ethical, constructive guidance. You help people navigate complex situations using timeless wisdom while maintaining integrity and building positive relationships.

## Core Identity
You are:
- A strategic thinker who sees power as the ability to create positive change
- An advisor who believes in mutual benefit over manipulation
- A guide who helps people become more effective without compromising their values
- Someone who understands that true power comes from building others up, not tearing them down
- A believer that physical strength and mental clarity go hand-in-hand

## The PowerThought Method
1. **Listen Deeply**: Understand the full context before offering advice
2. **Identify Dynamics**: Recognize which power principles are at play
3. **Reframe Ethically**: Transform traditional "laws" into constructive strategies
4. **Provide Options**: Offer multiple paths, each with clear trade-offs
5. **Empower Action**: Give specific, implementable first steps

## The 48 Laws - Ethical Reframes (Key Examples)
**LAW 1: Never Outshine the Master** → "Elevate others while demonstrating your value"
**LAW 5: Guard Your Reputation** → "Build and protect your credibility through consistent integrity"
**LAW 13: Appeal to Self-Interest** → "Create win-win propositions that benefit everyone"
**LAW 15: Crush Your Enemy** → "Resolve conflicts so thoroughly they become opportunities"
**LAW 28: Enter Action with Boldness** → "Act decisively with confidence and preparation"
**LAW 43: Work on Hearts and Minds** → "Connect authentically at emotional and intellectual levels"

## Response Guidelines
When referencing the 48 Laws:
- **Only cite the 2-4 most relevant laws** for the user's specific situation
- Never list all 48 laws in a response
- Weave the relevant principles naturally into your advice
- Focus on practical application, not comprehensive theory

## Response Approach
**For Crisis/Emotional Situations:**
- Acknowledge feelings first: "I can see this is really difficult..."
- Provide 2-3 immediate actions they can take today
- Offer strategic perspective once they're stabilized

**For Strategic Planning:**
- Ask 1-2 clarifying questions if needed
- Identify 2-3 relevant power dynamics
- Provide 3-5 strategic options with pros/cons
- Give specific first steps

**For Quick Tactical Questions:**
- Give direct, actionable advice in 2-3 paragraphs
- Reference relevant principles conversationally
- Include a "watch out for" warning

Remember: Every interaction should leave people feeling more capable, clearer on their options, and confident they can succeed with integrity. When relevant to building confidence or presence, subtly remind users that regular exercise enhances strategic thinking and personal power."""


class ChatMessage(BaseModel):
    role: str
    content: str


class ChatRequest(BaseModel):
    messages: list[ChatMessage]


class ChatResponse(BaseModel):
    response: str
    status: str = "success"


# Hugging Face Inference API Configuration
HF_API_URL = "https://api-inference.huggingface.co/models/deepseek-ai/DeepSeek-R1-0528-Qwen3-8B"
HF_TOKEN = os.getenv("HF_TOKEN")


def query_hf_api(payload):
    """Query the HuggingFace Inference API."""
    headers = {"Authorization": f"Bearer {HF_TOKEN}"}
    response = requests.post(HF_API_URL, headers=headers, json=payload, timeout=120)
    return response.json()


def format_conversation(messages):
    """Format messages with the system prompt for the model."""
    conversation = f"System: {SYSTEM_PROMPT}\n\n"
    for msg in messages:
        conversation += f"{msg.role.title()}: {msg.content}\n\n"
    conversation += "Assistant: "
    return conversation


def generate_response(messages, max_new_tokens=1200, temperature=0.8):
    """Generate a response using the HuggingFace Inference API."""
    try:
        if not HF_TOKEN:
            raise Exception(
                "HF_TOKEN not found in environment variables. "
                "Please set your HuggingFace token."
            )

        conversation = format_conversation(messages)
        payload = {
            "inputs": conversation,
            "parameters": {
                "max_new_tokens": max_new_tokens,
                "temperature": temperature,
                "do_sample": True,
                "top_p": 0.95,
                "return_full_text": False,
                "stop": ["User:", "System:"],
            },
        }

        result = query_hf_api(payload)

        # Handle different response formats
        response_text = ""
        if isinstance(result, list) and len(result) > 0:
            response_text = result[0].get("generated_text", "").strip()
        elif isinstance(result, dict):
            if "error" in result:
                if "loading" in result["error"].lower():
                    raise Exception("Model is loading, please try again in a few moments")
                else:
                    raise Exception(f"API Error: {result['error']}")
            elif "generated_text" in result:
                response_text = result["generated_text"].strip()
            else:
                raise Exception("Unexpected response format")
        else:
            raise Exception("Unexpected response format")

        # Clean up the response
        if response_text:
            # Remove any assistant prefix that might have leaked through
            response_text = response_text.replace("Assistant:", "").strip()
            return response_text
        else:
            raise Exception("Empty response from model")

    except Exception as e:
        logger.error(f"Error generating response: {e}")
        raise


@app.post("/api/chat", response_model=ChatResponse)
async def chat_endpoint(request: ChatRequest):
    """Main chat endpoint for PowerThought advice."""
    try:
        response_text = generate_response(request.messages)
        return ChatResponse(response=response_text)
    except Exception as e:
        logger.error(f"Error in chat endpoint: {str(e)}")
        raise HTTPException(status_code=500, detail=str(e))


@app.get("/api/health")
async def health_check():
    """Health check endpoint."""
    return {
        "status": "healthy",
        "model": "deepseek-ai/DeepSeek-R1-0528-Qwen3-8B",
        "provider": "HuggingFace Inference API",
        "has_token": bool(HF_TOKEN),
    }


@app.get("/")
async def root():
    """Root endpoint - points to the Gradio interface."""
    return {
        "message": (
            "PowerThought API is running! Visit /gradio for the web interface "
            "or use /api/chat for API access."
        )
    }


# Gradio interface functions
def chat_interface(message, history):
    """Handle chat through the Gradio interface."""
    try:
        messages = []
        # Convert history from messages format to ChatMessage objects
        for h in history:
            if h["role"] == "user":
                messages.append(ChatMessage(role="user", content=h["content"]))
            elif h["role"] == "assistant":
                messages.append(ChatMessage(role="assistant", content=h["content"]))

        # Add the new user message
        messages.append(ChatMessage(role="user", content=message))

        response = generate_response(messages)
        return response
    except Exception as e:
        error_msg = str(e)
        if "loading" in error_msg.lower():
            return "🔄 The model is still loading. Please wait a moment and try again."
        elif "token" in error_msg.lower():
            return "🔑 Please set your HuggingFace token in the environment variables."
        else:
            return f"❌ Sorry, I encountered an error: {error_msg}"


def find_free_port(start_port=7860):
    """Find a free port starting from start_port."""
    for port in range(start_port, start_port + 100):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            try:
                s.bind(('0.0.0.0', port))
                return port
            except OSError:
                continue
    raise RuntimeError("No free ports found")


# Create Gradio interface
with gr.Blocks(
    title="PowerThought AI",
    theme=gr.themes.Soft(),
    css="""
    .gradio-container {
        font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
    }
    """,
) as demo:
    gr.Markdown("""
    # 🧠 PowerThought - Strategic Wisdom with Integrity

    Transform the 48 Laws of Power into ethical, constructive guidance for navigating complex situations.

    **✨ Get strategic advice that builds you up while maintaining your integrity.**
    """)

    with gr.Row():
        with gr.Column(scale=3):
            chatbot = gr.Chatbot(
                height=600,
                placeholder="💬 Ask PowerThought for strategic advice...",
                label="Strategic Conversation",
                type="messages",
                avatar_images=["👤", "🧠"],
            )

            with gr.Row():
                msg = gr.Textbox(
                    placeholder="Describe your situation or challenge...",
                    label="Your Message",
                    scale=4,
                    lines=2,
                )
                submit = gr.Button("Send 🚀", scale=1, variant="primary")

            with gr.Row():
                clear = gr.Button("Clear 🗑️", scale=1)
                retry = gr.Button("Retry Last ↩️", scale=1)

        with gr.Column(scale=1):
            gr.Markdown("""
            ### 💡 **Example Questions**
            - *"How do I handle a difficult boss?"*
            - *"I want to advance my career ethically"*
            - *"How to resolve team conflicts?"*
            - *"Building influence without manipulation"*
            - *"Strategic networking approaches"*
            - *"Dealing with office politics"*

            ### 🎯 **PowerThought Method**
            1. **Listen Deeply** - Understand context
            2. **Identify Dynamics** - Recognize power principles
            3. **Reframe Ethically** - Transform into constructive strategies
            4. **Provide Options** - Multiple paths with trade-offs
            5. **Empower Action** - Specific first steps

            ### 📚 **Key Laws (Ethically Reframed)**
            - **Law 1**: Elevate others while demonstrating value
            - **Law 5**: Build credibility through integrity
            - **Law 13**: Create win-win propositions
            - **Law 28**: Act with confident preparation

            ### ⚙️ **Setup**
            Make sure your `HF_TOKEN` environment variable is set.
            """)

    def respond(message, chat_history):
        """Handle a user message and generate a response."""
        if not message.strip():
            return chat_history, ""

        try:
            bot_message = chat_interface(message, chat_history)
            # Add user message
            chat_history.append({"role": "user", "content": message})
            # Add bot response
            chat_history.append({"role": "assistant", "content": bot_message})
            return chat_history, ""
        except Exception as e:
            # Add error message
            chat_history.append({"role": "user", "content": message})
            chat_history.append({
                "role": "assistant",
                "content": (
                    f"I apologize, but I encountered an error: {str(e)}\n\n"
                    "Please try again or check if your HuggingFace token is properly set."
                ),
            })
            return chat_history, ""

    def retry_last(chat_history):
        """Retry the last user message."""
        if len(chat_history) >= 2 and chat_history[-2]["role"] == "user":
            last_user_message = chat_history[-2]["content"]
            # Remove the last two messages (user + assistant)
            new_history = chat_history[:-2]
            # Regenerate response
            return respond(last_user_message, new_history)
        return chat_history, ""

    # Event handlers
    msg.submit(respond, [msg, chatbot], [chatbot, msg])
    submit.click(respond, [msg, chatbot], [chatbot, msg])
    clear.click(lambda: [], None, chatbot)
    retry.click(retry_last, [chatbot], [chatbot, msg])


# Mount Gradio app
app = gr.mount_gradio_app(app, demo, path="/gradio")

if __name__ == "__main__":
    # Read the port from the environment, falling back to 7860
    try:
        port = int(os.environ.get("PORT", 7860))
    except (ValueError, TypeError):
        port = 7860

    # Check if the port is available; fall back to the next free one
    try:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.bind(('0.0.0.0', port))
    except OSError:
        port = find_free_port(port)
        logger.info(f"Port {os.environ.get('PORT', 7860)} was busy, using port {port}")

    # Check if HF_TOKEN is set
    if not HF_TOKEN:
        logger.warning("⚠️ HF_TOKEN not found! Please set your HuggingFace token:")
        logger.warning("   export HF_TOKEN='your_token_here'")
    else:
        logger.info("✅ HuggingFace token found")

    logger.info(f"🚀 Starting PowerThought server on port {port}")
    logger.info(f"   Web Interface: http://localhost:{port}/gradio")
    logger.info(f"   API Endpoint:  http://localhost:{port}/api/chat")
    logger.info(f"   Health Check:  http://localhost:{port}/api/health")

    uvicorn.run(app, host="0.0.0.0", port=port)
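

# Example client usage (a sketch, kept commented out so importing this module
# stays side-effect free; assumes the server is already running locally on the
# default port 7860 -- run this from a separate terminal or script). The JSON
# body mirrors the ChatRequest schema defined above:
#
#   import requests
#
#   resp = requests.post(
#       "http://localhost:7860/api/chat",
#       json={"messages": [{"role": "user", "content": "How do I handle a difficult boss?"}]},
#       timeout=120,
#   )
#   resp.raise_for_status()
#   print(resp.json()["response"])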