 # Deployment
 
-## Render
+## Render (Frontend Only)
 
-The project includes a `render.yaml` for one-click deployment to [Render](https://render.com/).
+The project includes a `render.yaml` that deploys the **frontend** to [Render](https://render.com/).
+The agent is deployed separately — see [LangGraph Platform](#langgraph-platform-langsmith-cloud) below.
 
-### Services
+### Frontend Service (Node)
 
-**Agent** (Python):
-- Runtime: Python 3.12.6
-- Build: `pip install uv && uv sync`
-- Start: `uv run uvicorn main:app --host 0.0.0.0 --port $PORT`
-- Health check: `GET /health`
-- Root directory: `apps/agent`
-
-**Frontend** (Node):
 - Runtime: Node 22
 - Build: `corepack enable && pnpm install --no-frozen-lockfile && pnpm --filter @repo/app build`
 - Start: `pnpm --filter @repo/app start`
 - Health check: `GET /api/health`
-- Root directory: (repo root)
 
 ### Environment Variables
 
-| Variable | Service | Required | Notes |
-|----------|---------|----------|-------|
-| `OPENAI_API_KEY` | Agent | Yes | Your OpenAI API key |
-| `LLM_MODEL` | Agent | No | Defaults to `gpt-5.4-2026-03-05` |
-| `LANGSMITH_API_KEY` | Agent | No | For LangSmith tracing |
-| `LANGGRAPH_DEPLOYMENT_URL` | Frontend | Auto | Injected from agent service via `fromService` |
-| `SKIP_INSTALL_DEPS` | Frontend | No | Set to `true` to skip redundant installs |
-
-### Auto-Scaling
+| Variable | Required | Notes |
+|----------|----------|-------|
+| `LANGGRAPH_DEPLOYMENT_URL` | Yes | URL of your LangGraph agent (e.g., `https://<id>.default.us.langgraph.app`) |
+| `LANGSMITH_API_KEY` | Yes* | Required when the agent runs on LangGraph Platform (sent as `x-api-key`) |
+| `RATE_LIMIT_ENABLED` | No | Set to `true` to enable per-IP rate limiting |
+| `RATE_LIMIT_WINDOW_MS` | No | Rate limit window in ms (default: 60000) |
+| `RATE_LIMIT_MAX` | No | Max requests per window (default: 40) |
+| `SKIP_INSTALL_DEPS` | No | Set to `true` to skip redundant installs |
 
-Both services are configured with:
-- Min instances: 1
-- Max instances: 3
-- Memory target: 80%
-- CPU target: 70%
+\*Not required if the agent is self-hosted on a private network without auth.
 
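The `RATE_LIMIT_*` variables suggest a fixed-window, per-IP limiter. A minimal sketch of those semantics (illustrative only — the function name and structure here are invented, not the project's actual middleware):

```typescript
// Hypothetical fixed-window rate limiter keyed by client IP.
// windowMs and max mirror RATE_LIMIT_WINDOW_MS and RATE_LIMIT_MAX defaults.
type Window = { start: number; count: number };

function createRateLimiter(windowMs: number = 60_000, max: number = 40) {
  const windows = new Map<string, Window>();
  // Returns true if the request is allowed, false if rate-limited.
  return (ip: string, now: number = Date.now()): boolean => {
    const w = windows.get(ip);
    if (!w || now - w.start >= windowMs) {
      // First request from this IP, or the previous window expired: reset.
      windows.set(ip, { start: now, count: 1 });
      return true;
    }
    w.count += 1;
    return w.count <= max;
  };
}
```

With the defaults above, the 41st request from one IP inside a 60-second window would be rejected; a new window resets the count.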
 ### Deploy
 
 1. Fork the repository
 2. Create a new **Blueprint** on Render
 3. Connect your forked repo
-4. Add `OPENAI_API_KEY` as a secret
-5. Deploy
+4. In the Render dashboard, set `LANGGRAPH_DEPLOYMENT_URL` to your agent's URL
+5. Set `LANGSMITH_API_KEY` to your LangSmith API key
+6. Deploy
+
+Render reads `render.yaml` and creates the frontend service.
+
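For orientation, a frontend-only blueprint consistent with the settings above might look roughly like this (the service name and exact fields are assumptions — the repo's actual `render.yaml` is authoritative):

```yaml
# Hypothetical sketch of a frontend-only Render blueprint.
services:
  - type: web
    name: app
    runtime: node
    buildCommand: corepack enable && pnpm install --no-frozen-lockfile && pnpm --filter @repo/app build
    startCommand: pnpm --filter @repo/app start
    healthCheckPath: /api/health
    envVars:
      - key: LANGGRAPH_DEPLOYMENT_URL
        sync: false   # set manually in the Render dashboard
      - key: LANGSMITH_API_KEY
        sync: false   # secret; set manually in the Render dashboard
```
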
+## LangGraph Platform (LangSmith Cloud)
+
+The agent can be deployed as a managed service on [LangGraph Platform](https://docs.langchain.com/langsmith/cli#deploy) using the `langgraph deploy` CLI. The platform provides built-in Postgres persistence, tracing, streaming, and auto-scaling.
+
+### Prerequisites
+
+- A [LangSmith](https://smith.langchain.com) account (Plus plan or higher)
+- LangSmith API key (`LANGSMITH_API_KEY`)
+- The `langgraph` CLI: `pip install langgraph-cli`
+- Docker installed and running (Apple Silicon users need Docker Buildx)
+
+### Deploy
+
+```bash
+cd apps/agent
+langgraph deploy --name open-generative-ui-agent
+```
+
+The CLI reads `langgraph.json`, builds a Docker image, and pushes it to the managed registry. Authentication uses the `LANGSMITH_API_KEY` environment variable or the `--api-key` flag.
+
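For reference, a minimal `langgraph.json` has roughly this shape (the graph name and module path below are assumptions for illustration, not the repo's actual values):

```json
{
  "python_version": "3.12",
  "dependencies": ["."],
  "graphs": {
    "agent": "./main.py:graph"
  },
  "env": ".env"
}
```
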
+After deployment, configure environment variables in the [LangSmith dashboard](https://smith.langchain.com):
+
+| Variable | Required | Notes |
+|----------|----------|-------|
+| `OPENAI_API_KEY` | Yes | Your OpenAI API key |
+| `LANGGRAPH_CLOUD` | Yes | Set to `true` — skips local checkpointer in favor of platform-managed Postgres |
+| `LLM_MODEL` | No | Defaults to `gpt-5.4-2026-03-05` |
+| `LANGCHAIN_TRACING_V2` | No | Set to `true` for built-in tracing |
+| `LANGCHAIN_PROJECT` | No | Project name for organizing traces |
+
+Note the deployment URL (e.g., `https://<id>.default.us.langgraph.app`).
+
+### Other CLI Commands
+
+| Command | Description |
+|---------|-------------|
+| `langgraph deploy list` | List all deployments |
+| `langgraph deploy logs` | Fetch runtime or build logs |
+| `langgraph deploy revisions list <id>` | Show revision history |
+| `langgraph dev` | Local dev server (no Docker) |
+
+## Split Deployment: Render + LangGraph Platform
+
+The recommended production setup runs the frontend on Render and the agent on LangGraph Platform. This gives you managed scaling and Postgres persistence for the agent, with familiar Node hosting for the frontend.
+
+### Architecture
+
+```
+┌──────────────────────┐     HTTPS + x-api-key      ┌─────────────────────────┐
+│  Render (Frontend)   │ ─────────────────────────▶ │   LangGraph Platform    │
+│  Next.js on Node 22  │                            │   (LangSmith Cloud)     │
+│  /api/copilotkit     │ ◀───────────────────────── │   Python agent          │
+└──────────────────────┘      SSE / streaming       └─────────────────────────┘
+```
+
+### Step-by-Step
+
+1. **Deploy the agent** on LangGraph Platform:
+   ```bash
+   cd apps/agent
+   langgraph deploy --name open-generative-ui-agent
+   ```
+   Then set `OPENAI_API_KEY` and `LANGGRAPH_CLOUD=true` in the LangSmith dashboard.
+   Note the deployment URL.
+
+2. **Deploy the frontend** on Render following the [Render](#render-frontend-only) section.
+   In the Render dashboard, set:
+   - `LANGGRAPH_DEPLOYMENT_URL` = your LangGraph Platform URL (e.g., `https://<id>.default.us.langgraph.app`)
+   - `LANGSMITH_API_KEY` = your LangSmith API key (starts with `lsv2_`)
+
+3. **Verify** by hitting `GET /api/health` on the frontend and sending a test message through the UI.
+
+### How Auth Works
 
-Render reads `render.yaml` and creates both services. The frontend automatically gets the agent URL via service discovery.
+The frontend's API route (`/api/copilotkit`) reads `LANGSMITH_API_KEY` from the environment and sends it as an `x-api-key` header on every request to the agent. LangGraph Platform validates this key. The API key is never exposed to the browser — it stays server-side in the Next.js API route.
+
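The header logic amounts to something like the following sketch (the function name and shape are invented for illustration, not the project's actual route code):

```typescript
// Illustrative only: build the headers a server-side Next.js API route
// would send to the agent. The key is read from the server environment,
// so it never reaches the browser.
function buildAgentHeaders(env: { LANGSMITH_API_KEY?: string }): Record<string, string> {
  const headers: Record<string, string> = { "Content-Type": "application/json" };
  if (env.LANGSMITH_API_KEY) {
    // LangGraph Platform authenticates requests via this header.
    headers["x-api-key"] = env.LANGSMITH_API_KEY;
  }
  return headers;
}
```

If `LANGSMITH_API_KEY` is unset (e.g., a self-hosted agent on a private network), the header is simply omitted.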
+### Self-Hosted vs. LangGraph Platform
+
+| Concern | Self-Hosted | LangGraph Platform |
+|---------|-------------|--------------------|
+| Checkpointer | BoundedMemorySaver (in-memory) | Managed Postgres (automatic) |
+| HTTP serving | FastAPI + uvicorn | Platform-managed |
+| Health checks | `/health` endpoint | Platform-managed |
+| Tracing | Optional (`LANGSMITH_API_KEY`) | Built-in |
+| Scaling | Manual / render.yaml | Platform-managed |
+| Auth | None (private network) | `LANGSMITH_API_KEY` required |
 
 ## General Deployment
 
@@ -88,55 +163,6 @@ Requirements:
 | Agent | `GET /health` | `{"status": "ok"}` |
 | Frontend | `GET /api/health` | 200 OK |
 
-## LangGraph Platform (LangSmith Cloud)
-
-The agent can be deployed as a managed service on [LangGraph Platform](https://docs.smith.langchain.com/langgraph-platform/deployment) for built-in tracing, streaming, and persistence.
-
-### Prerequisites
-
-- A [LangSmith](https://smith.langchain.com) account (Plus plan or higher)
-- LangSmith API key
-- The `langgraph` CLI: `pip install langgraph-cli`
-
-### Deploy
-
-1. Connect your GitHub repo to LangSmith via **Deployments > + New Deployment**
-2. Set the root directory to `apps/agent`
-3. Configure environment variables in the LangSmith dashboard:
-
-| Variable | Required | Notes |
-|----------|----------|-------|
-| `OPENAI_API_KEY` | Yes | Your OpenAI API key |
-| `LANGGRAPH_CLOUD` | Yes | Set to `true` — enables platform-native checkpointer |
-| `LLM_MODEL` | No | Defaults to `gpt-5.4-2026-03-05` |
-| `LANGCHAIN_TRACING_V2` | No | Set to `true` for built-in tracing |
-| `LANGCHAIN_PROJECT` | No | Project name for organizing traces |
-
-4. Note the deployment URL (e.g., `https://<id>.default.us.langgraph.app`)
-
-### Connect the Frontend
-
-Set `LANGGRAPH_DEPLOYMENT_URL` and `LANGSMITH_API_KEY` on your frontend:
-
-```bash
-LANGGRAPH_DEPLOYMENT_URL=https://<id>.default.us.langgraph.app \
-LANGSMITH_API_KEY=lsv2_... \
-pnpm --filter @repo/app start
-```
-
-The frontend automatically sends `x-api-key` headers when `LANGSMITH_API_KEY` is set.
-
-### Self-Hosted vs. LangGraph Platform
-
-| Concern | Self-Hosted (Render) | LangGraph Platform |
-|---------|---------------------|--------------------|
-| Checkpointer | BoundedMemorySaver (in-memory) | Managed Postgres (automatic) |
-| HTTP serving | FastAPI + uvicorn | Platform-managed |
-| Health checks | `/health` endpoint | Platform-managed |
-| Tracing | Optional (LANGSMITH_API_KEY) | Built-in |
-| Scaling | render.yaml config | Platform-managed |
-| Auth | None (private network) | LANGSMITH_API_KEY required |
-
 ## Docker
 
 A Dockerfile for the frontend is available at `docker/Dockerfile.app`. The agent can be containerized with a standard Python Dockerfile using `uv`.
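A minimal agent image in that style might look like this sketch (the file layout, `main:app` module path, and port are assumptions — adjust them to the repo's actual structure):

```dockerfile
# Hypothetical sketch of a Python agent image built with uv.
FROM python:3.12-slim
WORKDIR /app

# Install uv, then resolve dependencies from the lockfile for reproducible builds.
RUN pip install --no-cache-dir uv
COPY pyproject.toml uv.lock ./
RUN uv sync --frozen --no-dev

COPY . .
EXPOSE 8000
CMD ["uv", "run", "uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```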