Migrating from a fully self-hosted Render deployment (both agent and frontend) to a split topology: frontend on Render, agent on LangGraph Platform (LangSmith Cloud). The codebase changes are on `feat/deploy-langsmith-cloud` (commit `0d3a6e6`). This plan covers the safe rollout path: branch deploy to dev, verify, merge to main, promote to production.
- `render.yaml` — Agent service removed; `LANGGRAPH_DEPLOYMENT_URL` switched from `fromService` to `sync: false`
- `apps/agent/main.py` — `LANGGRAPH_CLOUD` detection fixed (truthy-string bug), startup log added
- `apps/agent/langgraph.json` — Already correct: `"sample_agent": "./main.py:agent"`, no `.env` ref
- `docs/deployment.md` — Rewritten for the split deployment with the `langgraph deploy` CLI
- `.env.example` — Noted that `LANGSMITH_API_KEY` is needed on the frontend
- LangSmith account (Plus plan or higher)
- LangSmith API key (`lsv2_...`) — obtain from https://smith.langchain.com/settings
- `langgraph` CLI installed: `pip install langgraph-cli`
- Docker installed and running (Apple Silicon: ensure Buildx is available)
- `OPENAI_API_KEY` ready for agent env vars
- Render dashboard access for the frontend service
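The prerequisites above can be checked with a small preflight script before starting. This is a sketch; the tool list reflects the commands this plan uses, so adjust it for your environment.

```shell
#!/bin/sh
# Preflight sketch: verify each required CLI tool is on PATH.
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "ok: $1"
  else
    echo "missing: $1"
  fi
}

# Tools this plan invokes at various phases.
for tool in docker langgraph pnpm gh; do
  check_tool "$tool"
done
```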
```
cd apps/agent
langgraph deploy \
  --name open-generative-ui-agent-dev \
  --deployment-type dev
```

- The CLI builds a Docker image from `langgraph.json` and pushes it to the managed registry
- If the build fails, rerun with `--verbose` to see the Docker output
- Apple Silicon: the CLI uses Buildx to cross-compile to `linux/amd64`
Navigate to the deployment in the LangSmith dashboard and set:

| Variable | Value |
|---|---|
| `OPENAI_API_KEY` | Your OpenAI key |
| `LANGGRAPH_CLOUD` | `true` |
| `LLM_MODEL` | `gpt-5.4-2026-03-05` (or preferred model) |
| `LANGCHAIN_TRACING_V2` | `true` |
| `LANGCHAIN_PROJECT` | `open-generative-ui-dev` |
After the deployment is live, note the URL: `https://<id>.default.us.langgraph.app`
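A quick scripted probe of the new deployment can catch auth or networking problems early. This is a hedged sketch: the `/ok` health path and the `x-api-key` header are assumptions about the LangGraph Platform REST API, so verify them against the platform docs for your deployment. `DRY_RUN=1` prints the command instead of executing it.

```shell
#!/bin/sh
# Probe a LangGraph Platform deployment's assumed /ok health endpoint.
probe_deployment() {
  url="$1"; key="$2"
  if [ "${DRY_RUN:-0}" = "1" ]; then
    # Print the command for review instead of running it.
    echo "curl -sf -H \"x-api-key: $key\" $url/ok"
  else
    curl -sf -H "x-api-key: $key" "$url/ok"
  fi
}

# Example (dry run):
#   DRY_RUN=1 probe_deployment "https://<id>.default.us.langgraph.app" "$LANGSMITH_API_KEY"
```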
Test locally before touching Render:

```
LANGGRAPH_DEPLOYMENT_URL=https://<id>.default.us.langgraph.app \
LANGSMITH_API_KEY=lsv2_... \
pnpm dev:app
```

- Frontend loads at http://localhost:3000
- Chat input accepts a message and gets a response
- Agent can add/update/complete todos (state sync works)
- Generative UI renders (widgetRenderer, charts)
- No checkpointer warnings in agent logs (check via `langgraph deploy logs`)
- Send a message, note the thread ID
- Refresh the page — conversation state should persist (Postgres-backed)
- This is the key improvement over BoundedMemorySaver (in-memory, lost on restart)
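The persistence check can also be done from the CLI by fetching the thread's stored state directly. The `/threads/<id>/state` path below is an assumption based on the LangGraph Platform REST API; confirm it in the platform's API reference before relying on it.

```shell
#!/bin/sh
# Build the REST URL for a thread's persisted state (path assumed, see note above).
thread_state_url() {
  printf '%s/threads/%s/state\n' "$1" "$2"
}

# Fetch the persisted state for the thread ID you noted:
#   curl -sf -H "x-api-key: $LANGSMITH_API_KEY" \
#     "$(thread_state_url "$LANGGRAPH_DEPLOYMENT_URL" "$THREAD_ID")"
```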
- Open the LangSmith dashboard → project `open-generative-ui-dev`
- Verify traces appear for each agent invocation
- Check for errors or unexpected latency in traces
- Confirm requests without an `x-api-key` header are rejected by the platform
- Confirm requests with a valid `LANGSMITH_API_KEY` succeed
- Rapid successive messages (rate limiting on frontend if enabled)
- Long-running agent response (streaming works end-to-end)
- Empty/malformed input handling
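For the rapid-message check, a small loop makes the test repeatable. This sketch only prints what it would send; swap the `echo` for a real `curl` POST when running it for real (the `/api/chat` path in the usage comment is a placeholder, not a route confirmed by this plan).

```shell
#!/bin/sh
# Fire N quick requests at a URL to exercise frontend rate limiting (dry form).
rapid_fire() {
  url="$1"; n="${2:-5}"
  i=1
  while [ "$i" -le "$n" ]; do
    echo "POST $url (message $i)"
    i=$((i + 1))
  done
}

# rapid_fire "http://localhost:3000/api/chat" 10
```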
If you want to test the full split deployment before merging:
Render supports branch-based preview environments. Push the branch:

```
git push origin feat/deploy-langsmith-cloud
```

In the Render dashboard, create a preview environment or manually set env vars on a staging service:
| Variable | Value |
|---|---|
| `LANGGRAPH_DEPLOYMENT_URL` | `https://<id>.default.us.langgraph.app` (dev deployment) |
| `LANGSMITH_API_KEY` | `lsv2_...` |
- Frontend health check passes: `GET /api/health` → 200
- Chat works end-to-end through Render → LangGraph Platform
- No CORS or network errors in browser console
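The health-check item above can be scripted so the preview environment is polled until it comes up. `GET /api/health` is the endpoint this plan already uses; the retry and sleep knobs are this sketch's own additions.

```shell
#!/bin/sh
# Poll the frontend health endpoint until it responds or attempts run out.
wait_for_health() {
  url="$1"; tries="${2:-10}"
  i=1
  while [ "$i" -le "$tries" ]; do
    if curl -sf "$url/api/health" >/dev/null 2>&1; then
      echo "healthy after $i attempt(s)"
      return 0
    fi
    i=$((i + 1))
    sleep "${HEALTH_SLEEP:-3}"
  done
  echo "unhealthy after $tries attempt(s)"
  return 1
}

# wait_for_health "https://<preview>.onrender.com" 10
```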
```
gh pr create \
  --title "feat: split deployment — frontend on Render, agent on LangGraph Platform" \
  --base main \
  --head feat/deploy-langsmith-cloud
```

After review/approval:

```
gh pr merge --squash
```

- CI smoke tests pass (build + lint + startup check)
- No regressions on the frontend build
```
cd apps/agent
langgraph deploy \
  --name open-generative-ui-agent \
  --deployment-type prod
```

Or update the existing deployment:

```
langgraph deploy \
  --name open-generative-ui-agent \
  --deployment-id <existing-deployment-id>
```

In the LangSmith dashboard, set the production env vars:

| Variable | Value |
|---|---|
| `OPENAI_API_KEY` | Production OpenAI key |
| `LANGGRAPH_CLOUD` | `true` |
| `LLM_MODEL` | `gpt-5.4-2026-03-05` |
| `LANGCHAIN_TRACING_V2` | `true` |
| `LANGCHAIN_PROJECT` | `open-generative-ui` |
After the deployment is live, note the production URL: `https://<prod-id>.default.us.langgraph.app`
In the Render dashboard, update the production frontend env vars:
| Variable | Value |
|---|---|
| `LANGGRAPH_DEPLOYMENT_URL` | `https://<prod-id>.default.us.langgraph.app` |
| `LANGSMITH_API_KEY` | Production LangSmith API key |
Trigger a redeploy (or it will auto-deploy from the main branch merge).
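If you prefer to trigger the redeploy from a script rather than the dashboard, Render's Deploy Hooks feature provides a URL you can POST to. This is a hedged sketch: create the hook URL in the service's settings; `RENDER_DEPLOY_HOOK_URL` is this plan's placeholder name, and `DRY_RUN=1` prints the command instead of executing it.

```shell
#!/bin/sh
# Trigger a Render redeploy via a Deploy Hook URL.
trigger_redeploy() {
  hook="$1"
  if [ "${DRY_RUN:-0}" = "1" ]; then
    # Print the command for review instead of running it.
    echo "curl -sf -X POST $hook"
  else
    curl -sf -X POST "$hook"
  fi
}

# DRY_RUN=1 trigger_redeploy "$RENDER_DEPLOY_HOOK_URL"
```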
- `GET /api/health` → 200
- Chat works end-to-end
- Todos persist across page refreshes
- Traces appear in LangSmith project `open-generative-ui`
- No errors in `langgraph deploy logs`
- Generative UI (widgets, charts) renders correctly
The previous Render agent service is still defined in the main branch prior to merge. If the LangGraph Platform deployment is broken:
- Revert the `render.yaml` change (restore the agent service block)
- Revert `LANGGRAPH_DEPLOYMENT_URL` to `fromService` in the Render dashboard
- Redeploy on Render
The frontend changes are minimal (no code changes to `route.ts`). Rolling back just means pointing `LANGGRAPH_DEPLOYMENT_URL` back to the old agent URL in the Render dashboard.
- Delete the dev deployment: `langgraph deploy delete <dev-deployment-id>`
- Remove the old Render agent service if it was kept as a fallback
- Confirm the LangSmith tracing project is receiving data
- Update any team runbooks or onboarding docs referencing the old Render agent
- Consider enabling `RATE_LIMIT_ENABLED=true` on the Render frontend