LangChain is an end-to-end agent engineering stack that helps teams build, observe, evaluate, and deploy reliable AI agents.
Debug non‑deterministic LLM behavior with end‑to‑end traces and live dashboards for costs, latency, and quality. Drill down into runs and tool calls to find failures fast and improve response quality.
Run online/offline evals, manage datasets, and collect human feedback with an annotation queue. Use evaluators to track accuracy, regressions, and reliability across iterations.
Ship agents with LangSmith Deployment: 1‑click deploys, real‑time streaming of intermediate steps, agent authorization, and horizontally scalable services. Access an Assistants API with 30+ endpoints covering state and memory, cron scheduling, and auth.
Automatically discover clusters of similar conversations to surface what users want and pinpoint systemic issues. Quickly find and remediate recurring problems across runs.
Choose Cloud, Hybrid, or Self‑Hosted in your VPC. Enable SSO, role‑based access, procurement/Infosec workflows, and SLAs to meet enterprise security and compliance needs.
Base traces: 14‑day retention at $0.50 per 1k. Extended traces: 400‑day retention at $5.00 per 1k; upgrading a base trace to extended costs $4.50 per 1k. Plus includes 1 free dev deployment; beyond that, deployment usage is billed by node executions and uptime. A Startup Plan offers discounted rates to eligible companies for 1 year, then graduates to Plus. See the docs for hourly ingestion and event limits.
Developer is best for personal projects and includes 1 free seat and 5k base traces/month. Plus is for teams needing self‑serve collaboration, 10k base traces/month, and 1 free dev deployment. Enterprise is for advanced security, admin, and hosting options (Cloud, Hybrid, or Self‑Hosted).
Yes. Early‑stage companies can apply for discounted LangSmith pricing with generous free trace allotments. Eligibility lasts 1 year before graduating to the Plus plan.
Developer and Plus seats are billed monthly (prorated if added mid‑month; no credit for removed seats). Trace usage is billed monthly in arrears. Enterprise plans are invoiced annually upfront.
A trace is a single execution of your app (agent, evaluator, or playground) and can contain many events. Base traces have 14‑day retention at $0.50/1k; extended traces have 400‑day retention at $5.00/1k. You can upgrade base to extended for $4.50/1k.
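Using the rates above, a monthly trace bill can be sketched as simple arithmetic. The helper below is illustrative only, not an official billing formula; the free-allotment handling is an assumption based on the plan descriptions:

```python
def trace_cost(base_traces: int, extended_traces: int, free_base: int = 0) -> float:
    """Estimate monthly trace spend from the published per-1k rates.

    base_traces:     traces kept at 14-day retention ($0.50 per 1k)
    extended_traces: traces kept at 400-day retention ($5.00 per 1k)
    free_base:       free monthly base-trace allotment (e.g. 5k on Developer)
    """
    BASE_RATE = 0.50      # USD per 1k base traces
    EXTENDED_RATE = 5.00  # USD per 1k extended traces
    billable_base = max(base_traces - free_base, 0)
    return billable_base / 1000 * BASE_RATE + extended_traces / 1000 * EXTENDED_RATE

# A Developer account sending 20k base and 2k extended traces in a month:
print(trace_cost(20_000, 2_000, free_base=5_000))  # 7.5 + 10.0 = 17.5
```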
Add a credit card on the Developer or Plus plan to continue sending traces. If you hit performance limits, upgrade plans or contact support@langchain.dev.
Yes. LangSmith supports OpenTelemetry (OTel), so you can unify your observability stack across services. With OTel, your app can be written in Python, TypeScript, or any other language.
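As one illustration, a standard OTLP exporter can be pointed at LangSmith through the usual OTel environment variables. The endpoint URL and header value below are placeholders, not confirmed values; check the LangSmith docs for the exact endpoint and authentication header:

```shell
# Standard OpenTelemetry exporter settings; substitute the OTLP endpoint
# and API-key header documented by LangSmith for your region and plan.
export OTEL_EXPORTER_OTLP_ENDPOINT="https://<langsmith-otlp-endpoint>"
export OTEL_EXPORTER_OTLP_HEADERS="x-api-key=<your-langsmith-api-key>"
```

Because these variables are part of the OTel specification, any OTel-instrumented service picks them up without code changes.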
Yes. Observability and Evaluation are decoupled—you can use either (or both). All plans provide access; you only pay for what you use.
Yes, on the Enterprise plan. LangSmith can be delivered to your Kubernetes cluster (AWS, GCP, or Azure) so data remains in your environment.
For smith.langchain.com, data is stored in GCP's us-central1 region. Enterprise deployments can run in your own cloud or VPC to meet residency needs.
No. The SDK sends traces via an asynchronous callback handler to a trace collector, so your application performance is not blocked by LangSmith.
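The non-blocking pattern can be sketched in plain Python: application code enqueues trace events, and a background thread ships them, so network I/O never stalls the request path. This is a generic illustration of the pattern, not LangSmith's SDK internals:

```python
import queue
import threading

class AsyncTraceSender:
    """Queue trace events on the hot path; export them from a daemon thread."""

    def __init__(self, export_fn):
        self._export = export_fn          # e.g. an HTTP POST to a collector
        self._queue = queue.Queue()
        self._worker = threading.Thread(target=self._drain, daemon=True)
        self._worker.start()

    def log(self, event: dict) -> None:
        # Called from application code: O(1), never blocks on the network.
        self._queue.put(event)

    def _drain(self) -> None:
        while True:
            event = self._queue.get()
            if event is None:             # sentinel enqueued by flush()
                self._queue.task_done()
                break
            try:
                self._export(event)       # network I/O happens off-thread
            finally:
                self._queue.task_done()

    def flush(self) -> None:
        # Block until everything queued so far has been exported.
        self._queue.put(None)
        self._queue.join()
        self._worker.join()

# Usage: the "collector" here just appends to a list.
sent = []
sender = AsyncTraceSender(sent.append)
sender.log({"run": "agent-1", "latency_ms": 120})
sender.log({"run": "agent-2", "latency_ms": 95})
sender.flush()
print(len(sent))  # 2
```

The daemon thread plus sentinel-and-join shutdown is a common way to get both non-blocking logging and a deterministic flush at process exit.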
Plus includes 1 free dev‑sized agent deployment. All usage in this dev deployment is free with no cap on node executions.
Use dev‑sized deployments for testing—they don’t support horizontal scaling, backups, or performance optimizations. Use production‑sized deployments for any customer‑facing agent.
Join thousands of developers who are already using LangChain to enhance their workflow and productivity.
Anthropic builds Claude, a family of frontier AI models and tools designed to be safe, reliable, and useful for both individuals and organizations.
GroqCloud is a high-performance AI inference platform built to deliver ultra-low latency, predictable cost, and production-grade reliability for real-world applications.
Claude is a next-generation AI assistant from Anthropic designed to help individuals and teams create, code, research, and analyze faster with strong safety and reliability.
Google AI Studio is a developer-focused platform that streamlines the journey from prompt to production with Gemini and other Google AI models.
OpenAI o1 is a new family of frontier reasoning models designed to spend more time thinking before they respond, enabling stronger performance on complex tasks in science, coding, and math.