Commit 2ca8db5: first commit

Files changed:

- .gitignore +32 -0
- Dockerfile +28 -0
- README.md +96 -0
- docker-compose.yml +8 -0
- main.py +253 -0
- models.py +76 -0
- requirements.txt +5 -0
.gitignore (ADDED)
```
# Environment variables
.env

# Python artifacts
__pycache__/
*.pyc
*.pyo
*.pyd
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# Virtual environment
.venv
venv/
ENV/
env/
```
Dockerfile (ADDED)
```dockerfile
# Use an official Python runtime as a parent image
FROM python:3.10-slim

# Set the working directory in the container
WORKDIR /app

# Copy the requirements file into the container at /app
COPY requirements.txt .

# Install any needed packages specified in requirements.txt
# Use --no-cache-dir to reduce image size
# Use --upgrade to ensure latest versions are installed
RUN pip install --no-cache-dir --upgrade -r requirements.txt

# Copy the application code into the container at /app
COPY main.py .
COPY models.py .

# Make port 7860 available to the world outside this container
EXPOSE 7860

# Define environment variables (placeholders, will be set at runtime)
ENV NOTION_COOKIE=""
ENV NOTION_SPACE_ID=""

# Run uvicorn when the container launches
# Use 0.0.0.0 to make it accessible externally
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "7860"]
```
README.md (ADDED)
````markdown
---
title: Notion API Bridge
emoji: 🌉
colorFrom: blue
colorTo: purple
sdk: docker
app_port: 7860
pinned: false
license: mit # Or choose another appropriate license if preferred
# Add any other relevant tags or configuration if needed
---

# OpenAI to Notion API Bridge

This project provides a FastAPI application that acts as a bridge between OpenAI-compatible API calls and the Notion API, allowing you to interact with Notion using standard OpenAI tools and libraries.

## Environment Variables

The application requires the following environment variables to be set:

* `NOTION_COOKIE`: Your Notion `token_v2` cookie value. This is used for authentication with the Notion API. You can typically find this in your browser's developer tools while logged into Notion.
* `NOTION_SPACE_ID`: The ID of your Notion workspace. You can usually find this in the URL when browsing your Notion workspace (it's the part after your domain and before the first page ID, often a UUID).

## Running Locally (without Docker)

1. Ensure you have Python 3.10+ installed.
2. Install dependencies:
   ```bash
   pip install -r requirements.txt
   ```
3. Create a `.env` file in the project root with your `NOTION_COOKIE` and `NOTION_SPACE_ID`:
   ```dotenv
   NOTION_COOKIE="your_cookie_value_here"
   NOTION_SPACE_ID="your_space_id_here"
   ```
4. Run the application using Uvicorn:
   ```bash
   uvicorn main:app --reload --port 7860
   ```
   The server will be available at `http://localhost:7860`.

## Running with Docker Compose (Recommended for Local Dev)

This method uses the `docker-compose.yml` file for a streamlined local development setup. It automatically builds the image if needed and loads environment variables directly from your `.env` file.

1. Ensure you have Docker and Docker Compose installed.
2. Make sure your `.env` file exists in the project root with your `NOTION_COOKIE` and `NOTION_SPACE_ID`.
3. Run the following command in the project root:
   ```bash
   docker-compose up --build -d
   ```
   * `--build`: Rebuilds the image if the `Dockerfile` or context has changed.
   * `-d`: Runs the container in detached mode (in the background).
4. The application will be accessible locally at `http://localhost:8139`.

To stop the service, run:
```bash
docker-compose down
```

## Running with Docker Command (Manual)

This method involves building and running the Docker container manually, passing environment variables directly in the command.

1. **Build the Docker image:**
   ```bash
   docker build -t notion-api-bridge .
   ```
2. **Run the Docker container:**
   Replace `"your_cookie_value"` and `"your_space_id"` with your actual Notion credentials.
   ```bash
   docker run -p 7860:7860 \
     -e NOTION_COOKIE="your_cookie_value" \
     -e NOTION_SPACE_ID="your_space_id" \
     notion-api-bridge
   ```
   The server will be available at `http://localhost:7860` (or whichever host port you mapped to the container's 7860).

## Deploying to Hugging Face Spaces

This application is designed to be easily deployed as a Docker Space on Hugging Face.

1. **Create a new Space:** Go to Hugging Face and create a new Space, selecting "Docker" as the Space SDK. Choose a name (e.g., `notion-api-bridge`).
2. **Upload Files:** Upload the `Dockerfile`, `main.py`, `models.py`, and `requirements.txt` to your Space repository. You can do this via the web interface or by cloning the repository and pushing the files. **Do not upload your `.env` file.**
3. **Add Secrets:** In your Space settings, navigate to the "Secrets" section. Add two secrets:
   * `NOTION_COOKIE`: Paste your Notion `token_v2` cookie value.
   * `NOTION_SPACE_ID`: Paste your Notion Space ID.

   Hugging Face will securely inject these secrets as environment variables into your running container.
4. **Deployment:** Hugging Face Spaces will automatically build the Docker image from your `Dockerfile` and run the container. It detects applications running on port 7860 (as specified in the `Dockerfile` and metadata).
5. **Accessing the API:** Once the Space is running, you can access the API endpoint at the Space's public URL. For example, if your Space URL is `https://your-username-your-space-name.hf.space`, you can send requests like the following (the `model` field is required by the OpenAI spec, but the bridge largely ignores it):
   ```bash
   curl -X POST https://your-username-your-space-name.hf.space/v1/chat/completions \
     -H "Content-Type: application/json" \
     -d '{
       "model": "gpt-3.5-turbo",
       "messages": [{"role": "user", "content": "Add a new page titled '\''Meeting Notes'\'' with content '\''Discuss project roadmap'\''"}]
     }'
   ```
````
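For a quick end-to-end check of the bridge from Python, here is a minimal client sketch (not included in this commit). It assumes the official `openai` package is installed separately; the base URL, API key, and prompt are placeholders, and the Notion model is actually selected by the request's `notion_model` field (default `anthropic-opus-4`), not by the OpenAI `model` name:

```python
# Hypothetical client-side smoke test, not part of this repo.
# Requires: pip install openai
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:7860/v1",  # the bridge, not api.openai.com
    api_key="unused",  # required by the client library, ignored by the bridge
)

stream = client.chat.completions.create(
    model="gpt-3.5-turbo",  # accepted for spec compliance; bridge largely ignores it
    messages=[{"role": "user", "content": "Hello from the bridge"}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```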
docker-compose.yml (ADDED)
```yaml
version: '3.8'
services:
  notion-bridge:
    build: .
    ports:
      - "8139:7860" # Map host port 8139 to container port 7860
    env_file:
      - .env
```
main.py (ADDED)
```python
import os
import uuid
import json
import time
import httpx
from fastapi import FastAPI, Request, HTTPException
from fastapi.responses import StreamingResponse
from dotenv import load_dotenv
# Absolute import: main.py and models.py sit side by side in /app,
# so a relative import (from .models) would fail at runtime.
from models import (
    ChatMessage, ChatCompletionRequest, NotionTranscriptConfigValue,
    NotionTranscriptItem, NotionDebugOverrides, NotionRequestBody,
    ChoiceDelta, Choice, ChatCompletionChunk, Model, ModelList
)

# Load environment variables from .env file
load_dotenv()

# --- Configuration ---
NOTION_API_URL = "https://www.notion.so/api/v3/runInferenceTranscript"
# IMPORTANT: Load the Notion cookie securely from environment variables
NOTION_COOKIE = os.getenv("NOTION_COOKIE")

NOTION_SPACE_ID = os.getenv("NOTION_SPACE_ID")
if not NOTION_COOKIE:
    print("Error: NOTION_COOKIE environment variable not set.")
    # Consider raising HTTPException or exiting in a real app
if not NOTION_SPACE_ID:
    print("Warning: NOTION_SPACE_ID environment variable not set. Using a default UUID.")
    # Using a default might not be ideal, depends on Notion's behavior
    # Consider raising an error instead: raise ValueError("NOTION_SPACE_ID not set")
    NOTION_SPACE_ID = str(uuid.uuid4())  # Default or raise error

# --- FastAPI App ---
app = FastAPI()

# --- Helper Functions ---

def build_notion_request(request_data: ChatCompletionRequest) -> NotionRequestBody:
    """Transforms OpenAI-style messages to Notion transcript format."""
    transcript = [
        NotionTranscriptItem(
            type="config",
            value=NotionTranscriptConfigValue(model=request_data.notion_model)
        )
    ]
    for message in request_data.messages:
        # Map 'assistant' role to 'markdown-chat', all others to 'user'
        if message.role == "assistant":
            # Notion uses "markdown-chat" for assistant replies in the transcript history
            transcript.append(NotionTranscriptItem(type="markdown-chat", value=message.content))
        else:
            # Map user, system, and any other potential roles to 'user'
            transcript.append(NotionTranscriptItem(type="user", value=[[message.content]]))

    # Use globally configured spaceId, set createThread=True
    return NotionRequestBody(
        spaceId=NOTION_SPACE_ID,  # From environment variable
        transcript=transcript,
        createThread=True,  # Always create a new thread
        # Generate a new traceId for each request
        traceId=str(uuid.uuid4()),
        # Explicitly set debugOverrides, generateTitle, and saveAllThreadOperations
        debugOverrides=NotionDebugOverrides(
            cachedInferences={},
            annotationInferences={},
            emitInferences=False
        ),
        generateTitle=False,
        saveAllThreadOperations=False
    )

async def stream_notion_response(notion_request_body: NotionRequestBody):
    """Streams the request to Notion and yields OpenAI-compatible SSE chunks."""
    headers = {
        'accept': 'application/x-ndjson',
        'accept-language': 'en-US,en;q=0.9,zh-CN;q=0.8,zh;q=0.7,zh-TW;q=0.6,ja;q=0.5',
        'content-type': 'application/json',
        'notion-audit-log-platform': 'web',
        'notion-client-version': '23.13.0.3604',  # Consider making this configurable
        'origin': 'https://www.notion.so',
        'priority': 'u=1, i',
        # Referer might be optional or need adjustment. Removing threadId part.
        'referer': 'https://www.notion.so',
        'sec-ch-ua': '"Chromium";v="136", "Google Chrome";v="136", "Not.A/Brand";v="99"',
        'sec-ch-ua-mobile': '?0',
        'sec-ch-ua-platform': '"Windows"',
        'sec-fetch-dest': 'empty',
        'sec-fetch-mode': 'cors',
        'sec-fetch-site': 'same-origin',
        'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/136.0.0.0 Safari/537.36',
        'cookie': NOTION_COOKIE  # Loaded from .env
    }

    chunk_id = f"chatcmpl-{uuid.uuid4()}"
    created_time = int(time.time())

    try:
        async with httpx.AsyncClient(timeout=None) as client:  # No timeout for streaming
            async with client.stream("POST", NOTION_API_URL, json=notion_request_body.dict(), headers=headers) as response:
                if response.status_code != 200:
                    error_content = await response.aread()
                    print(f"Error from Notion API: {response.status_code}")
                    print(f"Response: {error_content.decode()}")
                    # Yield an error message in SSE format? Or just raise exception?
                    # For now, raise internal server error in the endpoint
                    raise HTTPException(status_code=response.status_code, detail=f"Notion API Error: {error_content.decode()}")

                async for line in response.aiter_lines():
                    if not line.strip():
                        continue
                    try:
                        data = json.loads(line)
                        # Check if it's the type of message containing text chunks
                        if data.get("type") == "markdown-chat" and isinstance(data.get("value"), str):
                            content_chunk = data["value"]
                            if content_chunk:  # Only send if there's content
                                chunk = ChatCompletionChunk(
                                    id=chunk_id,
                                    created=created_time,
                                    choices=[Choice(delta=ChoiceDelta(content=content_chunk))]
                                )
                                yield f"data: {chunk.json()}\n\n"
                        # Add logic here to detect the end of the stream if Notion has a specific marker
                        # For now, we assume markdown-chat stops when the main content is done.
                        # If we see a recordMap, it's definitely past the text stream.
                        elif "recordMap" in data:
                            print("Detected recordMap, stopping stream.")
                            break  # Stop processing after recordMap

                    except json.JSONDecodeError:
                        print(f"Warning: Could not decode JSON line: {line}")
                    except Exception as e:
                        print(f"Error processing line: {line} - {e}")
                        # Decide if we should continue or stop

        # Send the final chunk indicating stop
        final_chunk = ChatCompletionChunk(
            id=chunk_id,
            created=created_time,
            choices=[Choice(delta=ChoiceDelta(), finish_reason="stop")]
        )
        yield f"data: {final_chunk.json()}\n\n"
        yield "data: [DONE]\n\n"

    except httpx.RequestError as e:
        print(f"HTTPX Request Error: {e}")
        # Yield an error message or handle in the endpoint
        # For now, let the endpoint handle it
        raise HTTPException(status_code=500, detail=f"Error connecting to Notion API: {e}")
    except Exception as e:
        print(f"Unexpected error during streaming: {e}")
        # Yield an error message or handle in the endpoint
        raise HTTPException(status_code=500, detail=f"Internal server error during streaming: {e}")


# --- API Endpoints ---

@app.get("/v1/models", response_model=ModelList)
async def list_models():
    """
    Endpoint to list available Notion models, mimicking OpenAI's /v1/models.
    """
    available_models = [
        "openai-gpt-4.1",
        "anthropic-opus-4",
        "anthropic-sonnet-4"
    ]
    model_list = [
        Model(id=model_id, owned_by="notion")  # created uses default_factory
        for model_id in available_models
    ]
    return ModelList(data=model_list)

@app.post("/v1/chat/completions")
async def chat_completions(request_data: ChatCompletionRequest, request: Request):
    """
    Endpoint to mimic OpenAI's chat completions, proxying to Notion.
    """
    if not NOTION_COOKIE:
        raise HTTPException(status_code=500, detail="Server configuration error: Notion cookie not set.")

    notion_request_body = build_notion_request(request_data)

    if request_data.stream:
        return StreamingResponse(
            stream_notion_response(notion_request_body),
            media_type="text/event-stream"
        )
    else:
        # --- Non-Streaming Logic (Optional - Collects stream internally) ---
        # Note: The primary goal is streaming, but a non-streaming version
        # might be useful for testing or simpler clients.
        # This requires collecting all chunks from the async generator.
        full_response_content = ""
        final_finish_reason = None
        chunk_id = f"chatcmpl-{uuid.uuid4()}"  # Generate ID for the non-streamed response
        created_time = int(time.time())

        try:
            async for line in stream_notion_response(notion_request_body):
                if line.startswith("data: ") and "[DONE]" not in line:
                    try:
                        data_json = line[len("data: "):].strip()
                        if data_json:
                            chunk_data = json.loads(data_json)
                            if chunk_data.get("choices"):
                                delta = chunk_data["choices"][0].get("delta", {})
                                content = delta.get("content")
                                if content:
                                    full_response_content += content
                                finish_reason = chunk_data["choices"][0].get("finish_reason")
                                if finish_reason:
                                    final_finish_reason = finish_reason
                    except json.JSONDecodeError:
                        print(f"Warning: Could not decode JSON line in non-streaming mode: {line}")

            # Construct the final OpenAI-compatible non-streaming response
            return {
                "id": chunk_id,
                "object": "chat.completion",
                "created": created_time,
                "model": request_data.model,  # Return the model requested by the client
                "choices": [
                    {
                        "index": 0,
                        "message": {
                            "role": "assistant",
                            "content": full_response_content,
                        },
                        "finish_reason": final_finish_reason or "stop",  # Default to stop if not explicitly set
                    }
                ],
                "usage": {  # Note: Token usage is not available from Notion
                    "prompt_tokens": None,
                    "completion_tokens": None,
                    "total_tokens": None,
                },
            }
        except HTTPException as e:
            # Re-raise HTTP exceptions from the streaming function
            raise e
        except Exception as e:
            print(f"Error during non-streaming processing: {e}")
            raise HTTPException(status_code=500, detail="Internal server error processing Notion response")


# --- Uvicorn Runner ---
# Allows running with `python main.py` for simple testing,
# but `uvicorn main:app --reload` is recommended for development.
if __name__ == "__main__":
    import uvicorn
    print("Starting server. Access at http://127.0.0.1:7860")
    print("Ensure NOTION_COOKIE is set in your .env file or environment.")
    uvicorn.run(app, host="127.0.0.1", port=7860)
```
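To make the wire format concrete: `stream_notion_response` frames each Notion text delta as an OpenAI `chat.completion.chunk` SSE event. A sketch of the frames it emits, built with the same models (not part of the commit; the id and timestamp are placeholders):

```python
# Illustration only: one content frame plus the closing frames.
from models import ChatCompletionChunk, Choice, ChoiceDelta

chunk = ChatCompletionChunk(
    id="chatcmpl-example",
    created=1700000000,
    choices=[Choice(delta=ChoiceDelta(content="Hello"))],
)
print(f"data: {chunk.json()}\n")
# -> data: {"id": "chatcmpl-example", "object": "chat.completion.chunk",
#           "created": 1700000000, "model": "notion-proxy",
#           "choices": [{"index": 0, "delta": {"content": "Hello"}, "finish_reason": null}]}

final = ChatCompletionChunk(
    id="chatcmpl-example",
    created=1700000000,
    choices=[Choice(delta=ChoiceDelta(), finish_reason="stop")],
)
print(f"data: {final.json()}\n")
print("data: [DONE]\n")
```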
models.py (ADDED)
```python
import time
import uuid
from pydantic import BaseModel, Field
from typing import List, Optional, Dict, Any, Literal, Union

# --- Models Moved from main.py ---

# Input Models (OpenAI-like)
class ChatMessage(BaseModel):
    role: Literal["system", "user", "assistant"]
    content: str

class ChatCompletionRequest(BaseModel):
    messages: List[ChatMessage]
    model: str = "notion-proxy"  # Model name can be passed, but we map to Notion's model
    stream: bool = False
    # Add other potential OpenAI params if needed, though they might not map directly
    # max_tokens: Optional[int] = None
    # temperature: Optional[float] = None
    # space_id and thread_id are now handled globally via environment variables
    notion_model: str = "anthropic-opus-4"  # Default Notion model, can be overridden


# Notion Models
class NotionTranscriptConfigValue(BaseModel):
    type: str = "markdown-chat"
    model: str  # e.g., "anthropic-opus-4"

class NotionTranscriptItem(BaseModel):
    type: Literal["config", "user", "markdown-chat"]
    value: Union[List[List[str]], str, NotionTranscriptConfigValue]

class NotionDebugOverrides(BaseModel):
    cachedInferences: Dict = Field(default_factory=dict)
    annotationInferences: Dict = Field(default_factory=dict)
    emitInferences: bool = False

class NotionRequestBody(BaseModel):
    traceId: str = Field(default_factory=lambda: str(uuid.uuid4()))
    spaceId: str
    transcript: List[NotionTranscriptItem]
    # threadId is removed, createThread will be set to true
    createThread: bool = True
    debugOverrides: NotionDebugOverrides = Field(default_factory=NotionDebugOverrides)
    generateTitle: bool = False
    saveAllThreadOperations: bool = True


# Output Models (OpenAI SSE)
class ChoiceDelta(BaseModel):
    content: Optional[str] = None

class Choice(BaseModel):
    index: int = 0
    delta: ChoiceDelta
    finish_reason: Optional[Literal["stop", "length"]] = None

class ChatCompletionChunk(BaseModel):
    id: str = Field(default_factory=lambda: f"chatcmpl-{uuid.uuid4()}")
    object: str = "chat.completion.chunk"
    created: int = Field(default_factory=lambda: int(time.time()))
    model: str = "notion-proxy"  # Or could reflect the underlying Notion model
    choices: List[Choice]


# --- Models for /v1/models Endpoint ---

class Model(BaseModel):
    id: str
    object: str = "model"
    created: int = Field(default_factory=lambda: int(time.time()))
    owned_by: str = "notion"  # Or specify based on actual model origin if needed

class ModelList(BaseModel):
    object: str = "list"
    data: List[Model]
```
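As a usage sketch (not part of the commit), this is roughly the transcript body that `build_notion_request` assembles from an OpenAI-style conversation using the models above; the `spaceId` is a placeholder UUID, not a real workspace:

```python
# Sketch of the Notion request body these models serialize to.
from models import (
    NotionRequestBody, NotionTranscriptItem, NotionTranscriptConfigValue,
)

body = NotionRequestBody(
    spaceId="11111111-2222-3333-4444-555555555555",  # placeholder
    transcript=[
        # Leading config item selects the Notion model
        NotionTranscriptItem(
            type="config",
            value=NotionTranscriptConfigValue(model="anthropic-opus-4"),
        ),
        # User turns are nested string lists; assistant turns are plain strings
        NotionTranscriptItem(type="user", value=[["Hello, Notion!"]]),
        NotionTranscriptItem(type="markdown-chat", value="Hi! How can I help?"),
    ],
)
print(body.json())  # traceId, createThread, debugOverrides etc. use their defaults
```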
requirements.txt (ADDED)
```
fastapi
uvicorn[standard]
httpx
pydantic
python-dotenv
```