Commit 0d3a6e6

feat: split deployment — frontend on Render, agent on LangGraph Platform
Remove agent service from render.yaml (frontend-only on Render), switch LANGGRAPH_DEPLOYMENT_URL from Render service discovery to manually-set env var pointing to LangGraph Platform. Fix bool("false")==True bug in LANGGRAPH_CLOUD detection, add startup log for platform mode. Rewrite deployment docs with langgraph deploy CLI workflow and split deployment guide.
1 parent 6ecca78 commit 0d3a6e6

5 files changed: 117 additions & 110 deletions


.env.example

Lines changed: 1 addition & 0 deletions
@@ -9,6 +9,7 @@ LLM_MODEL=gpt-5.4-2026-03-05
 # LANGCHAIN_TRACING_V2=true
 # LANGCHAIN_PROJECT=open-generative-ui
 # LANGSMITH_API_KEY=
+# ^ Also required on the frontend for cloud deployments (sent as x-api-key header)
 
 # Rate limiting (per IP) — disabled by default
 RATE_LIMIT_ENABLED=false

.gitignore

Lines changed: 4 additions & 0 deletions
@@ -63,3 +63,7 @@ bun.lockb
 
 # References
 .references
+
+# Auto-generated by Chalk
+.chalk/context/
+.chalk/skills/*.enabled

apps/agent/main.py

Lines changed: 7 additions & 1 deletion
@@ -3,6 +3,7 @@
 It defines the workflow graph, state, tools, nodes and edges.
 """
 
+import logging
 import os
 import warnings
 from pathlib import Path
@@ -22,11 +23,16 @@
 
 load_dotenv()
 
+logger = logging.getLogger(__name__)
+
 # LangGraph Platform provides a managed Postgres-backed checkpointer,
 # so BoundedMemorySaver is only needed for self-hosted / local dev.
-_on_langgraph_platform = bool(os.environ.get("LANGGRAPH_CLOUD"))
+_on_langgraph_platform = os.environ.get("LANGGRAPH_CLOUD", "").lower() == "true"
 checkpointer = None if _on_langgraph_platform else BoundedMemorySaver(max_threads=200)
 
+if _on_langgraph_platform:
+    logger.info("LANGGRAPH_CLOUD=true — using platform-managed checkpointer")
+
 agent = create_deep_agent(
     model=ChatOpenAI(model=os.environ.get("LLM_MODEL", "gpt-5.4-2026-03-05")),
     tools=[query_data, plan_visualization, *todo_tools, generate_form],
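The change above fixes a classic environment-variable pitfall: `bool()` on a string is `True` for any non-empty value, so `LANGGRAPH_CLOUD=false` used to enable platform mode. A minimal sketch of the before/after behavior (the two function names are illustrative, not from the repo; they take a dict so the behavior is testable without touching `os.environ`):

```python
def detect_platform_buggy(env: dict[str, str]) -> bool:
    # Old check: any non-empty value — even "false" or "0" — is truthy
    return bool(env.get("LANGGRAPH_CLOUD"))


def detect_platform_fixed(env: dict[str, str]) -> bool:
    # New check: only an explicit (case-insensitive) "true" enables platform mode
    return env.get("LANGGRAPH_CLOUD", "").lower() == "true"
```

With the old check, `detect_platform_buggy({"LANGGRAPH_CLOUD": "false"})` returns `True`, silently disabling the local checkpointer; the fixed check returns `False` there and `True` only for `"true"`/`"TRUE"`.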

docs/deployment.md

Lines changed: 104 additions & 78 deletions
@@ -1,52 +1,127 @@
 # Deployment
 
-## Render
+## Render (Frontend Only)
 
-The project includes a `render.yaml` for one-click deployment to [Render](https://render.com/).
+The project includes a `render.yaml` that deploys the **frontend** to [Render](https://render.com/).
+The agent is deployed separately — see [LangGraph Platform](#langgraph-platform-langsmith-cloud) below.
 
-### Services
+### Frontend Service (Node)
 
-**Agent** (Python):
-- Runtime: Python 3.12.6
-- Build: `pip install uv && uv sync`
-- Start: `uv run uvicorn main:app --host 0.0.0.0 --port $PORT`
-- Health check: `GET /health`
-- Root directory: `apps/agent`
-
-**Frontend** (Node):
 - Runtime: Node 22
 - Build: `corepack enable && pnpm install --no-frozen-lockfile && pnpm --filter @repo/app build`
 - Start: `pnpm --filter @repo/app start`
 - Health check: `GET /api/health`
-- Root directory: (repo root)
 
 ### Environment Variables
 
-| Variable | Service | Required | Notes |
-|----------|---------|----------|-------|
-| `OPENAI_API_KEY` | Agent | Yes | Your OpenAI API key |
-| `LLM_MODEL` | Agent | No | Defaults to `gpt-5.4-2026-03-05` |
-| `LANGSMITH_API_KEY` | Agent | No | For LangSmith tracing |
-| `LANGGRAPH_DEPLOYMENT_URL` | Frontend | Auto | Injected from agent service via `fromService` |
-| `SKIP_INSTALL_DEPS` | Frontend | No | Set to `true` to skip redundant installs |
-
-### Auto-Scaling
+| Variable | Required | Notes |
+|----------|----------|-------|
+| `LANGGRAPH_DEPLOYMENT_URL` | Yes | URL of your LangGraph agent (e.g., `https://<id>.default.us.langgraph.app`) |
+| `LANGSMITH_API_KEY` | Yes* | Required when agent is on LangGraph Platform (sent as `x-api-key`) |
+| `RATE_LIMIT_ENABLED` | No | Set to `true` to enable per-IP rate limiting |
+| `RATE_LIMIT_WINDOW_MS` | No | Rate limit window in ms (default: 60000) |
+| `RATE_LIMIT_MAX` | No | Max requests per window (default: 40) |
+| `SKIP_INSTALL_DEPS` | No | Set to `true` to skip redundant installs |
 
-Both services are configured with:
-- Min instances: 1
-- Max instances: 3
-- Memory target: 80%
-- CPU target: 70%
+\*Not required if the agent is self-hosted on a private network without auth.
 
 ### Deploy
 
 1. Fork the repository
 2. Create a new **Blueprint** on Render
 3. Connect your forked repo
-4. Add `OPENAI_API_KEY` as a secret
-5. Deploy
+4. In the Render dashboard, set `LANGGRAPH_DEPLOYMENT_URL` to your agent's URL
+5. Set `LANGSMITH_API_KEY` to your LangSmith API key
+6. Deploy
+
+Render reads `render.yaml` and creates the frontend service.
+
+## LangGraph Platform (LangSmith Cloud)
+
+The agent can be deployed as a managed service on [LangGraph Platform](https://docs.langchain.com/langsmith/cli#deploy) using the `langgraph deploy` CLI. The platform provides built-in Postgres persistence, tracing, streaming, and auto-scaling.
+
+### Prerequisites
+
+- A [LangSmith](https://smith.langchain.com) account (Plus plan or higher)
+- LangSmith API key (`LANGSMITH_API_KEY`)
+- The `langgraph` CLI: `pip install langgraph-cli`
+- Docker installed and running (Apple Silicon users need Docker Buildx)
+
+### Deploy
+
+```bash
+cd apps/agent
+langgraph deploy --name open-generative-ui-agent
+```
+
+The CLI reads `langgraph.json`, builds a Docker image, and pushes it to the managed registry. Auth is via the `LANGSMITH_API_KEY` env var or the `--api-key` flag.
+
+After deployment, configure environment variables in the [LangSmith dashboard](https://smith.langchain.com):
+
+| Variable | Required | Notes |
+|----------|----------|-------|
+| `OPENAI_API_KEY` | Yes | Your OpenAI API key |
+| `LANGGRAPH_CLOUD` | Yes | Set to `true` — skips local checkpointer in favor of platform-managed Postgres |
+| `LLM_MODEL` | No | Defaults to `gpt-5.4-2026-03-05` |
+| `LANGCHAIN_TRACING_V2` | No | Set to `true` for built-in tracing |
+| `LANGCHAIN_PROJECT` | No | Project name for organizing traces |
+
+Note the deployment URL (e.g., `https://<id>.default.us.langgraph.app`).
+
+### Other CLI Commands
+
+| Command | Description |
+|---------|-------------|
+| `langgraph deploy list` | List all deployments |
+| `langgraph deploy logs` | Fetch runtime or build logs |
+| `langgraph deploy revisions list <id>` | Show revision history |
+| `langgraph dev` | Local dev server (no Docker) |
+
+## Split Deployment: Render + LangGraph Platform
+
+The recommended production setup runs the frontend on Render and the agent on LangGraph Platform. This gives you managed scaling and Postgres persistence for the agent, with familiar Node hosting for the frontend.
+
+### Architecture
+
+```
+┌──────────────────────┐     HTTPS + x-api-key       ┌─────────────────────────┐
+│  Render (Frontend)   │ ──────────────────────────▶ │   LangGraph Platform    │
+│  Next.js on Node 22  │                             │   (LangSmith Cloud)     │
+│  /api/copilotkit     │ ◀────────────────────────── │   Python agent          │
+└──────────────────────┘       SSE / streaming       └─────────────────────────┘
+```
+
+### Step-by-Step
+
+1. **Deploy the agent** on LangGraph Platform:
+   ```bash
+   cd apps/agent
+   langgraph deploy --name open-generative-ui-agent
+   ```
+   Then set `OPENAI_API_KEY` and `LANGGRAPH_CLOUD=true` in the LangSmith dashboard.
+   Note the deployment URL.
+
+2. **Deploy the frontend** on Render following the [Render](#render-frontend-only) section.
+   In the Render dashboard, set:
+   - `LANGGRAPH_DEPLOYMENT_URL` = your LangGraph Platform URL (e.g., `https://<id>.default.us.langgraph.app`)
+   - `LANGSMITH_API_KEY` = your LangSmith API key (starts with `lsv2_`)
+
+3. **Verify** by hitting `GET /api/health` on the frontend and sending a test message through the UI.
+
+### How Auth Works
 
-Render reads `render.yaml` and creates both services. The frontend automatically gets the agent URL via service discovery.
+The frontend's API route (`/api/copilotkit`) reads `LANGSMITH_API_KEY` from the environment and sends it as an `x-api-key` header on every request to the agent. LangGraph Platform validates this key. The API key is never exposed to the browser — it stays server-side in the Next.js API route.
+
+### Self-Hosted vs. LangGraph Platform
+
+| Concern | Self-Hosted | LangGraph Platform |
+|---------|-------------|--------------------|
+| Checkpointer | BoundedMemorySaver (in-memory) | Managed Postgres (automatic) |
+| HTTP serving | FastAPI + uvicorn | Platform-managed |
+| Health checks | `/health` endpoint | Platform-managed |
+| Tracing | Optional (`LANGSMITH_API_KEY`) | Built-in |
+| Scaling | Manual / render.yaml | Platform-managed |
+| Auth | None (private network) | `LANGSMITH_API_KEY` required |
 
 ## General Deployment
 
@@ -88,55 +163,6 @@ Requirements:
 | Agent | `GET /health` | `{"status": "ok"}` |
 | Frontend | `GET /api/health` | 200 OK |
 
-## LangGraph Platform (LangSmith Cloud)
-
-The agent can be deployed as a managed service on [LangGraph Platform](https://docs.smith.langchain.com/langgraph-platform/deployment) for built-in tracing, streaming, and persistence.
-
-### Prerequisites
-
-- A [LangSmith](https://smith.langchain.com) account (Plus plan or higher)
-- LangSmith API key
-- The `langgraph` CLI: `pip install langgraph-cli`
-
-### Deploy
-
-1. Connect your GitHub repo to LangSmith via **Deployments > + New Deployment**
-2. Set the root directory to `apps/agent`
-3. Configure environment variables in the LangSmith dashboard:
-
-| Variable | Required | Notes |
-|----------|----------|-------|
-| `OPENAI_API_KEY` | Yes | Your OpenAI API key |
-| `LANGGRAPH_CLOUD` | Yes | Set to `true` — enables platform-native checkpointer |
-| `LLM_MODEL` | No | Defaults to `gpt-5.4-2026-03-05` |
-| `LANGCHAIN_TRACING_V2` | No | Set to `true` for built-in tracing |
-| `LANGCHAIN_PROJECT` | No | Project name for organizing traces |
-
-4. Note the deployment URL (e.g., `https://<id>.default.us.langgraph.app`)
-
-### Connect the Frontend
-
-Set `LANGGRAPH_DEPLOYMENT_URL` and `LANGSMITH_API_KEY` on your frontend:
-
-```bash
-LANGGRAPH_DEPLOYMENT_URL=https://<id>.default.us.langgraph.app \
-LANGSMITH_API_KEY=lsv2_... \
-pnpm --filter @repo/app start
-```
-
-The frontend automatically sends `x-api-key` headers when `LANGSMITH_API_KEY` is set.
-
-### Self-Hosted vs. LangGraph Platform
-
-| Concern | Self-Hosted (Render) | LangGraph Platform |
-|---------|---------------------|--------------------|
-| Checkpointer | BoundedMemorySaver (in-memory) | Managed Postgres (automatic) |
-| HTTP serving | FastAPI + uvicorn | Platform-managed |
-| Health checks | `/health` endpoint | Platform-managed |
-| Tracing | Optional (LANGSMITH_API_KEY) | Built-in |
-| Scaling | render.yaml config | Platform-managed |
-| Auth | None (private network) | LANGSMITH_API_KEY required |
-
 ## Docker
 
 A Dockerfile for the frontend is available at `docker/Dockerfile.app`. The agent can be containerized with a standard Python Dockerfile using `uv`.

render.yaml

Lines changed: 1 addition & 31 deletions
@@ -1,31 +1,4 @@
 services:
-  # ── Agent (LangGraph Python) — native Python with FastAPI + uvicorn ──
-  - type: web
-    name: open-generative-ui-agent
-    runtime: python
-    plan: starter
-    rootDir: apps/agent
-    buildCommand: pip install uv && uv sync
-    startCommand: uv run uvicorn main:app --host 0.0.0.0 --port $PORT
-    healthCheckPath: /health
-    scaling:
-      minInstances: 1
-      maxInstances: 3
-      targetMemoryPercent: 80
-      targetCPUPercent: 70
-    envVars:
-      - key: PYTHON_VERSION
-        value: "3.12.6"
-      - key: OPENAI_API_KEY
-        sync: false
-      - key: LANGSMITH_API_KEY
-        sync: false
-      - key: LLM_MODEL
-        value: gpt-5.4-2026-03-05
-    buildFilter:
-      paths:
-        - apps/agent/**
-
   # ── Frontend (Next.js) — native Node runtime ──
   - type: web
     name: open-generative-ui-app
@@ -45,10 +18,7 @@ services:
       - key: SKIP_INSTALL_DEPS
         value: "true"
       - key: LANGGRAPH_DEPLOYMENT_URL
-        fromService:
-          name: open-generative-ui-agent
-          type: web
-          property: hostport
+        sync: false
      - key: LANGSMITH_API_KEY
        sync: false
      - key: RATE_LIMIT_ENABLED
