About us

At LangChain, our mission is to make intelligent agents ubiquitous. We build the foundation for agent engineering in the real world, helping developers move from prototypes to production-ready AI agents that teams can rely on. What began as widely adopted open-source tools has grown into a platform for building, evaluating, deploying, and operating agents at scale. LangChain provides the agent engineering platform and open-source frameworks developers need to ship reliable agents fast: LangSmith offers observability, evaluation, and deployment for rapid iteration, while our open-source frameworks LangGraph, LangChain, and Deep Agents help developers build agents with speed and granular control. LangSmith is trusted by leading AI teams at Zip, Vanta, Klarna, Workday, LinkedIn, Cloudflare, and more.

Website
langchain.com
Industry
Technology, Information and Internet
Company size
51-200 employees
Type
Privately Held

Employees at LangChain

Updates

  • LangChain reposted this

    Day 3 of my harness engineering series: context management with middleware

    For long-running agents, you need periodic compaction of the conversation history so you don't overflow the context window. Summarization needs to retain the most important information, though, so your agent stays on track after each compression. LangChain's built-in SummarizationMiddleware handles this automatically once your configured threshold is reached, so the model never suffers from context overflow. Context engineering is a runtime problem, and middleware makes it easy to manage your context at every point in the agent loop.

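As a rough illustration of the compaction idea in the post above, here is a minimal, self-contained sketch. It is not LangChain's actual SummarizationMiddleware: the token estimate is a crude heuristic and the summarizer is a stand-in for an LLM call; all names are invented for illustration.

```python
def estimate_tokens(messages):
    # Crude stand-in for a real tokenizer: assume ~4 characters per token.
    return sum(len(m["content"]) for m in messages) // 4

def summarize(messages):
    # Stand-in for an LLM summarization call; a real implementation would
    # prompt a model to retain goals, decisions, and open tasks.
    return {"role": "system",
            "content": f"[summary of {len(messages)} earlier messages]"}

def compact_history(messages, max_tokens=1000, keep_last=4):
    """Replace older messages with a summary once the token budget is hit."""
    if estimate_tokens(messages) <= max_tokens:
        return messages
    head, tail = messages[:-keep_last], messages[-keep_last:]
    return [summarize(head)] + tail
```

The key property is the one the post describes: recent turns survive verbatim, older ones collapse into a summary, and nothing runs until the threshold trips.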
  • View organization page for LangChain

    501,759 followers

    📣 Announcing our partnership with MongoDB: the AI stack that runs on the database you already trust

    We've partnered with MongoDB so teams can go from prototype to production on their existing Atlas deployments, without standing up parallel infrastructure. Atlas Vector Search plugs directly into LangChain as a drop-in retriever for semantic, hybrid, and GraphRAG queries. The MongoDB Checkpointer for LangSmith Deployment persists agent state (multi-turn memory, human-in-the-loop workflows, audit trails) directly in Atlas. Text-to-MQL lets agents query operational data in plain English. And LangSmith traces everything end to end, from retrieval calls to routing decisions to state transitions.

    ➡️ Read the announcement: https://lnkd.in/euhrUfN6
    ➡️ Check out the MongoDB docs: https://lnkd.in/efnMyncJ
    ➡️ Check out our docs: https://lnkd.in/eewuc9Ww

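To make the checkpointer idea concrete, here is a dict-backed stand-in sketch: the real MongoDB checkpointer writes equivalent per-thread records to an Atlas collection, but the class and method names below are invented for illustration only.

```python
class DictCheckpointer:
    """In-memory stand-in for a durable, per-conversation checkpoint store."""

    def __init__(self):
        self._store = {}  # thread_id -> list of checkpoints, oldest first

    def put(self, thread_id, state):
        # Append a checkpoint so multi-turn memory (and an audit trail of
        # intermediate states) survives across process restarts.
        self._store.setdefault(thread_id, []).append(dict(state))

    def get_latest(self, thread_id):
        checkpoints = self._store.get(thread_id)
        return checkpoints[-1] if checkpoints else None

cp = DictCheckpointer()
cp.put("thread-1", {"messages": ["hi"], "step": 1})
cp.put("thread-1", {"messages": ["hi", "hello"], "step": 2})
```

Keeping every checkpoint (rather than overwriting) is what enables human-in-the-loop resumption and auditing, at the cost of storage.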
  • LangChain reposted this

    Day 2 of my harness engineering series: dynamic configuration

    Your agent's model, tools, and system prompt don't have to be fixed at creation time: middleware lets you reshape them at every step based on user or conversation context. One example: LangChain's built-in LLMToolSelectorMiddleware runs a fast secondary model to filter your tool registry before the main call, so only the relevant tools make it into context. This reduces context bloat and can improve model performance.

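A self-contained sketch of the pre-call selection pattern described above: here a keyword-overlap heuristic stands in for the fast secondary LLM, and the tool registry is made up; this is the shape of the technique, not LLMToolSelectorMiddleware's actual implementation.

```python
TOOL_REGISTRY = {
    "search_web":  "search the web for current information",
    "query_sql":   "run SQL queries against the analytics database",
    "send_email":  "send an email to a recipient",
    "get_weather": "look up the weather forecast for a city",
}

def select_tools(user_message, registry, limit=2):
    """Rank tools by word overlap with the request; keep only the top few."""
    words = set(user_message.lower().split())
    scored = [(len(words & set(desc.split())), name)
              for name, desc in registry.items()]
    scored.sort(reverse=True)
    # Only tools with some relevance survive into the main model call.
    return [name for score, name in scored[:limit] if score > 0]
```

Swapping the overlap heuristic for a cheap model call gives the middleware pattern from the post: a small pre-pass trims the registry so the main call sees only relevant tools.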
  • View organization page for LangChain


    New conceptual guide: 🔄 The agent improvement loop starts with a trace

    Tracing is the foundational primitive for improving agents. A trace gives you the full behavioral record of what an agent actually did. From there, teams can enrich traces with evals and human feedback, turn recurring failures into test cases, validate fixes before shipping, and repeat. This guide breaks down the full improvement loop and why reliable agents are built through trace-centered iteration, not one-off debugging.

    Read more → https://lnkd.in/eQ-Rdc5R

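The notion of a trace as a behavioral record can be sketched in a few lines of plain Python. The `Trace` class, event kinds, and toy agent loop below are illustrative inventions, not LangSmith's API; platforms like LangSmith capture this structure automatically via instrumentation.

```python
import time

class Trace:
    """A behavioral record of one agent run: an ordered list of steps."""

    def __init__(self, run_id):
        self.run_id = run_id
        self.steps = []

    def record(self, kind, payload):
        # Append one step (model call, tool call, state change) with a
        # timestamp, so failures can be replayed and turned into test cases.
        self.steps.append({"kind": kind, "payload": payload,
                           "ts": time.time()})

def run_agent(trace):
    # Toy agent loop emitting the kinds of events a real trace contains.
    trace.record("model_call", {"prompt": "plan the task"})
    trace.record("tool_call", {"tool": "search_web", "args": {"q": "docs"}})
    trace.record("model_call", {"prompt": "final answer"})
    return "done"
```

Everything in the improvement loop (evals, feedback, regression tests) hangs off this ordered record of what the agent actually did.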
  • New LangChain Academy Course Launch: Monitoring Production Agents

    Shipping agents to production is hard. Unlike traditional software, agents are non-deterministic: users can say anything, and the same input can produce different outputs. You can’t rely on pre-launch testing alone. To build great agents, you need to understand how they behave in production by analyzing conversations, responses, and execution steps.

    This course teaches you how to monitor and improve agents in production with LangSmith, our platform for agent observability and evals. We’ll dive into how to track costs, uncover trends with trace analysis, monitor quality and latency, and detect issues like prompt injection and PII leakage. By the end, you’ll know how to confidently understand, improve, and safeguard your agents in production.

    Enroll for free ➡️ https://lnkd.in/gsRtiBnD
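Cost tracking, one of the monitoring topics mentioned above, can be illustrated with a tiny self-contained sketch. The model names and per-1K-token prices below are made up for the example; a real monitoring setup would derive usage from trace metadata.

```python
# Hypothetical per-1K-token prices -- illustrative only, not real pricing.
PRICE_PER_1K = {"small-model": 0.0005, "large-model": 0.01}

def total_cost(runs):
    """Sum spend across runs, where each run reports model + token count."""
    cost = 0.0
    for run in runs:
        cost += run["tokens"] / 1000 * PRICE_PER_1K[run["model"]]
    return round(cost, 6)
```

Aggregating the same records by model, user, or time window is what surfaces the cost trends the course covers.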

  • LangChain reposted this

    New series this week: how to use middleware to customize your agent harness!

    Case 1: business logic and compliance

    Many agents need a combination of agentic and deterministic steps. For example, your application might require PII redaction to run on any inputs before they're sent to a model. You can use LangChain's built-in PIIMiddleware to mask, redact, hash, or block PII, or build your own custom middleware for other compliance needs.

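A minimal sketch of the deterministic pre-model step described above. LangChain's actual PIIMiddleware supports multiple strategies and entity types; this stand-in only masks email addresses and US-style phone numbers, and the patterns are deliberately simplistic.

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b")

def redact_pii(text):
    """Mask emails and phone numbers before the text reaches a model."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)
```

Running this as the first step of the agent loop makes the compliance behavior deterministic: the model never sees the raw values, regardless of what the user typed.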
  • The hardest part of debugging an AI agent isn't knowing it failed; it's knowing why. We rebuilt the detail view in LangSmith Experiments from the ground up to answer that question faster. The next time you inspect experiment results, you'll find:
    * Less clutter
    * Better trace visibility
    * Clearer evaluator reasoning
    * Easier comparison workflows
    Try it out at smith.langchain.com and let us know what you think!

  • LangChain reposted this

    Just released a new guide on deploying Deep Agents! It walks through decisions like:
    1. Multi-tenancy: scoping memory and files for multi-user applications
    2. Execution environment: the choice between ephemeral and persistent (shared across conversations) sandboxes
    3. Guardrails: both control-flow (model and tool call limits) and context-dependent (PII redaction, content moderation)
    And more! https://lnkd.in/epDY6cRZ
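The control-flow guardrail from item 3 above can be sketched as a small budget object checked inside the agent loop. The class name and limits here are invented for illustration, not taken from the guide.

```python
class CallLimitGuardrail:
    """Cap the number of model and tool calls one agent run may make."""

    def __init__(self, max_model_calls=10, max_tool_calls=20):
        self.limits = {"model": max_model_calls, "tool": max_tool_calls}
        self.counts = {"model": 0, "tool": 0}

    def allow(self, kind):
        # Return True (and consume budget) if another call of this kind
        # fits; the agent loop should stop or escalate on False.
        if self.counts[kind] >= self.limits[kind]:
            return False
        self.counts[kind] += 1
        return True
```

Checking `allow(...)` before every model or tool invocation bounds runaway loops deterministically, independent of model behavior.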

Affiliated pages

Similar pages

Browse jobs

Funding

LangChain: 3 total rounds

Last Round

Series B

US$ 125.0M

See more info on Crunchbase