A practical guide to creating modular, reusable agent architectures that can be shared across projects. LangGraph is a robust framework for building stateful, multi-agent applications using Large Language Models (LLMs). Think of it as a way to create conversation flows where different AI agents can work together, each with their own specialized role.
The LangGraph Philosophy
At its core, LangGraph treats AI applications as stateful graphs where nodes represent agents or functions, and edges represent the flow of data and control. This might sound abstract, but consider it a conversation between specialized experts, where each builds upon the previous one’s insights.
Imagine you’re building a research assistant. Instead of one massive AI trying to do everything, you could have a research agent that gathers information, a formatter agent that structures it nicely, and a validator agent that checks for accuracy. Each agent does what it’s best at, and they pass their work along to the next agent in the pipeline.
This graph-based approach makes it incredibly easy to model complex workflows. You can break down complex tasks into independent components that work together. The state flows naturally between agents, maintaining context and data across multiple interactions. And because it’s a graph, you can build sophisticated patterns like feedback loops, parallel processing, and conditional branching.
Note: I recently started using Groq to run my LLM calls, and I love it. It’s a single place to pick from multiple LLM offerings, and it’s fast. I used to run Ollama locally, but it was too slow on my Mac (M3 Pro with 18 GB). Groq is super fast!
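If you want to try Groq with LangChain, here’s a minimal sketch, assuming the `langchain-groq` package is installed and `GROQ_API_KEY` is set in your environment (the model name is just an example):

```python
from langchain_groq import ChatGroq

# Assumes GROQ_API_KEY is set; the model name is illustrative, pick any Groq-hosted model
llm = ChatGroq(model="llama-3.1-8b-instant", temperature=0)

print(llm.invoke("Explain LangGraph in one sentence.").content)
```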
Core LangGraph Concepts
🏗️ State Management
The heart of any LangGraph application is its state, which is a shared data structure that flows between nodes. Think of it as a shared workspace where each agent can read from and write to different fields. It’s like having a whiteboard that everyone in your team can see and contribute to.
Here’s how you define state in LangGraph:
```python
from typing import Annotated, Any, Dict, List
from typing_extensions import TypedDict
from langgraph.graph.message import add_messages

class MyState(TypedDict):
    messages: Annotated[List, add_messages]  # Conversation history
    user_input: str                          # Current user input
    processed_data: Dict[str, Any]           # Agent outputs
    metadata: Dict[str, Any]                 # Additional context
```
The magic happens with the `Annotated` type and `add_messages`. This automatically handles message accumulation, so you don’t have to manually manage conversation history. Each agent can add its messages to the conversation, and LangGraph keeps track of everything for you.
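As a rough illustration of the reducer in action, a node only needs to return new messages and LangGraph appends them to the existing history rather than overwriting it (a minimal sketch with a hypothetical node name):

```python
from langchain_core.messages import AIMessage

def greeting_node(state: MyState) -> dict:
    # Because of the add_messages reducer, this list is appended to the
    # existing history instead of replacing it.
    return {"messages": [AIMessage(content="Hello! How can I help?")]}
```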
🔄 Workflow Orchestration
Building a LangGraph workflow is like drawing a flowchart, but instead of boxes and arrows on paper, you’re defining the flow programmatically. You start by creating a `StateGraph`, then add nodes (representing your agents) and edges (connecting them).
```python
workflow = StateGraph(MyState)

# Add nodes (agents/functions)
workflow.add_node("agent1", agent1_function)
workflow.add_node("agent2", agent2_function)

# Define flow
workflow.add_edge(START, "agent1")
workflow.add_edge("agent1", "agent2")
workflow.add_edge("agent2", END)

# Compile the graph
app = workflow.compile()
```
This creates a simple linear flow: START → agent1 → agent2 → END. But the real power comes when you start adding more sophisticated patterns like conditional branching, parallel processing, and feedback loops.
🛠️ Tool Integration
One of the most powerful features of LangGraph is how easily agents can use external tools. Whether it’s searching the web, calling APIs, or accessing databases, agents can seamlessly integrate with any external service.
The LangChain tool system makes this incredibly straightforward:
```python
from langchain_core.tools import tool

@tool
def web_search(query: str) -> str:
    """Search the web for information"""
    return search_results

# Bind tools to LLM
llm_with_tools = llm.bind_tools([web_search])
```
Just decorate your function with `@tool`, and LangGraph automatically makes it available to your agents. The LLM can then decide when and how to use these tools based on the context of the conversation.
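To close the loop, something has to actually execute the tool calls the LLM emits. One common wiring uses LangGraph’s prebuilt `ToolNode` and `tools_condition` helpers; here’s a rough sketch (the `agent_node` name is illustrative):

```python
from langgraph.prebuilt import ToolNode, tools_condition

def agent_node(state: MyState) -> dict:
    # The tool-bound LLM decides whether to answer directly or request a tool call
    return {"messages": [llm_with_tools.invoke(state["messages"])]}

workflow.add_node("agent", agent_node)
workflow.add_node("tools", ToolNode([web_search]))

# Route to the tool node when the last message contains tool calls, otherwise end
workflow.add_conditional_edges("agent", tools_condition)
workflow.add_edge("tools", "agent")
```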
💾 Memory & Persistence
Real-world applications need to remember conversations and maintain context across sessions. LangGraph’s built-in checkpointing makes this trivial – you can save and resume conversations without any additional complexity.
```python
from langgraph.checkpoint.memory import MemorySaver

# Add memory to your graph
app = workflow.compile(checkpointer=MemorySaver())

# Save and resume conversations
config = {"configurable": {"thread_id": "user_123"}}
result = app.invoke(initial_state, config=config)
```
This is incredibly powerful for building chatbots, research assistants, or any application where you want to maintain context across multiple interactions. Each conversation gets a unique thread ID, and LangGraph handles all the persistence for you.
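Because the checkpointer keys everything by thread ID, continuing a conversation is just a matter of invoking the graph again with the same config. A quick sketch (the messages are placeholders):

```python
config = {"configurable": {"thread_id": "user_123"}}

# First turn
app.invoke({"messages": [("user", "My name is Priya.")]}, config=config)

# Later turn on the same thread: earlier messages are restored from the checkpoint
result = app.invoke({"messages": [("user", "What is my name?")]}, config=config)
```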
🔀 Advanced Flow Control
While linear workflows are great for simple tasks, the magic happens when you start building more sophisticated flow patterns. LangGraph supports everything from conditional branching to parallel processing and feedback loops.
Conditional Edges let you route based on the current state. For example, you might want to validate data only if it looks suspicious:
```python
def should_continue(state):
    return "continue" if state["needs_more_info"] else "end"

workflow.add_conditional_edges("agent", should_continue, {
    "continue": "next_agent",
    "end": END
})
```
Parallel Processing is ideal for handling multiple independent tasks. Maybe you want to research a topic and analyze market data simultaneously:
```python
workflow.add_edge(START, "agent1")
workflow.add_edge(START, "agent2")
workflow.add_edge(["agent1", "agent2"], "synthesizer")
```
Feedback Loops create iterative processes where agents can refine their work. A typical pattern is having a validator that can send work back for improvement:
```python
workflow.add_edge("agent", "validator")
# Loop back only if validation fails ("validation_failed" is an illustrative flag);
# an unconditional edge back to "agent" would loop forever
workflow.add_conditional_edges(
    "validator", lambda s: "agent" if s["validation_failed"] else END)
```
These patterns let you build incredibly sophisticated workflows that can handle complex, real-world scenarios.
The Multi-Agent Research System
Our system demonstrates how to build a research pipeline where three specialized agents work together:
- 🔍 Research Agent – Gathers information using web search and fact-checking
- 📝 Formatter Agent – Structures the raw research into a professional report
- ✅ Validator Agent – Reviews the content for accuracy and flags potential issues
State Flow
```python
class ResearchState(TypedDict):
    messages: Annotated[List, add_messages]
    topic: str
    raw_research: str
    formatted_content: Dict[str, str]
    validation_results: Dict[str, Any]
    final_output: str
    sources: List[str]
    validation_issues: List[str]
```
The state acts like a shared workspace where each agent can read from and write to different fields. The research agent populates `raw_research`, the formatter uses that to create `formatted_content`, and the validator adds `validation_results`.
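To make that concrete, a node only needs to return the fields it owns, and LangGraph merges them into the shared state. A simplified sketch (the node name and its body are placeholders, not the real research agent):

```python
def example_research_node(state: ResearchState) -> dict:
    # Placeholder logic: in the real agent this comes from the LLM and web search
    findings = f"Key findings about {state['topic']}..."
    return {
        "raw_research": findings,
        "sources": ["https://example.com/article"],  # illustrative source list
    }
```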
Building the Workflow Graph
```python
def create_research_graph():
    workflow = StateGraph(ResearchState)

    # Add agent nodes
    workflow.add_node("research", research_agent_wrapper)
    workflow.add_node("formatter", formatter_agent_wrapper)
    workflow.add_node("validator", validator_agent_wrapper)
    workflow.add_node("finalizer", finalizer_wrapper)

    # Define the flow
    workflow.add_edge(START, "research")
    workflow.add_edge("research", "formatter")
    workflow.add_edge("formatter", "validator")
    workflow.add_edge("validator", "finalizer")
    workflow.add_edge("finalizer", END)

    return workflow.compile(checkpointer=MemorySaver())
```
This creates a linear pipeline where each agent processes the state and passes it to the next agent. The beauty is that you can easily modify this flow – add parallel processing, conditional branches, or feedback loops.
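For example, here’s a hedged sketch of turning the validator step into a feedback loop: swap the fixed validator → finalizer edge for a conditional edge that routes back to research when issues were flagged (treating a non-empty `validation_issues` list as a failure is my assumption, not part of the original pipeline):

```python
def route_after_validation(state: ResearchState) -> str:
    # Assumed convention: any entries in validation_issues trigger another research pass
    return "research" if state.get("validation_issues") else "finalizer"

workflow.add_conditional_edges(
    "validator",
    route_after_validation,
    {"research": "research", "finalizer": "finalizer"},
)
```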
Observability with LangSmith
LangSmith is LangChain’s observability platform that provides deep insights into your LangGraph applications. It’s like having a debugging and monitoring dashboard for your AI workflows.
Why LangSmith Matters
Building multi-agent systems is exciting, but debugging them can be a nightmare. When something goes wrong, you need to know exactly what happened, where it happened, and why. That’s where LangSmith comes in.
Imagine you’ve built a research system with three agents, and suddenly it’s producing weird results. Without proper observability, you’re left guessing: Did the research agent fail to find good sources? Did the formatter misunderstand the data? Or did the validator miss something important?
LangSmith gives you complete visibility into your system. You can see exactly what each agent is doing, how long things take, where errors occur, and how much you’re spending on API calls. It’s like having a dashboard that shows you the inner workings of your AI system in real-time.
Setting Up LangSmith
1. Environment Configuration
```
# Add this to your .env file
LANGCHAIN_API_KEY=your_langsmith_api_key
LANGCHAIN_TRACING_V2=true
LANGCHAIN_PROJECT=research-assistant
LANGCHAIN_ENDPOINT=https://api.smith.langchain.com
```
2. Basic Integration
```python
import os
from langchain_core.tracers import LangChainTracer

# Your existing LangGraph code works automatically!
workflow = create_research_graph()
result = workflow.invoke(initial_state)
```
3. Advanced Tracing
```python
from langchain_core.tracers import LangChainTracer
from langchain_core.callbacks import CallbackManager

# Create a tracer
tracer = LangChainTracer()

# Add to your workflow
config = {
    "configurable": {"thread_id": "user_123"},
    "callbacks": [tracer]
}
result = workflow.invoke(initial_state, config=config)
```
What You Get with LangSmith
Once you have LangSmith set up, you get a comprehensive view of your multi-agent system that’s both powerful and easy to understand.
Execution Traces show you the complete flow of your system in a beautiful, interactive timeline. You can see exactly when each agent runs, what data flows between them, and how they interact with external tools.
Performance Metrics help you understand how your system is performing. You’ll see how long each agent takes to execute, how many tokens you’re consuming (and spending), and where the bottlenecks are. This is crucial for optimization – you might discover that one agent is taking 10 seconds while others finish in milliseconds.
Debugging Tools are a must-have when things go wrong. You can pause execution at any point, inspect the state, and see exactly what data is flowing between agents. When an error occurs, you get detailed stack traces with complete context, making it much easier to identify and fix issues.
Analytics Dashboard gives you insights into usage patterns and trends over time. You can see which agents are used most frequently, track performance improvements, identify expensive operations, and monitor overall system reliability. This data is invaluable for making informed decisions about system architecture and optimization.
LangSmith in Our Research System
```python
def create_research_graph_with_tracing():
    """Create research workflow with LangSmith integration"""
    # Set up tracing
    os.environ["LANGCHAIN_TRACING_V2"] = "true"
    os.environ["LANGCHAIN_PROJECT"] = "multi-agent-research"

    # Create workflow (same as before)
    workflow = StateGraph(ResearchState)
    workflow.add_node("research", research_agent_wrapper)
    workflow.add_node("formatter", formatter_agent_wrapper)
    workflow.add_node("validator", validator_agent_wrapper)
    workflow.add_node("finalizer", finalizer_wrapper)

    # Define flow
    workflow.add_edge(START, "research")
    workflow.add_edge("research", "formatter")
    workflow.add_edge("formatter", "validator")
    workflow.add_edge("validator", "finalizer")
    workflow.add_edge("finalizer", END)

    return workflow.compile(checkpointer=MemorySaver())


# Run with tracing
def run_research_with_observability(topic: str):
    workflow = create_research_graph_with_tracing()

    # LangSmith automatically tracks this execution
    result = workflow.invoke({
        "messages": [HumanMessage(content=f"Research topic: {topic}")],
        "topic": topic,
        # ... other state fields
    })

    return result
```
Sample view from the smith.langchain.com site…
LangSmith Dashboard Features
Trace View
- Timeline: Visual representation of agent execution
- State Snapshots: See the state at each step
- Tool Calls: Detailed view of external API calls
- LLM Interactions: Input/output for each model
Analytics
- Performance Metrics: Latency, throughput, error rates
- Cost Analysis: Token usage and spending by model
- Usage Patterns: Most common workflows and paths
- Trend Analysis: Performance over time
Debugging
- Step Debugging: Pause execution at any point
- State Inspection: Examine data flow between agents
- Error Analysis: Detailed error information with context
- Replay: Re-run specific parts of the workflow
Making It Reusable
The challenge with multi-agent systems is that they’re often tightly coupled to a particular use case. A simple solution is to extract the core agent logic into reusable modules.
The Architecture
```
langgraph-researcher/
├── .env                 # DO NOT CHECK IN. Holds your Groq, LangChain and other API keys
├── research.py          # Main workflow orchestration. Run from the command line
├── agent_functions.py   # Reusable agent logic
├── tools.py             # Reusable tools
```
Agent Functions: The Heart of Reusability
```python
def research_agent(state: Dict[str, Any], llm: BaseLanguageModel,
                   tools: List[BaseTool]) -> Dict[str, Any]:
    """Conducts comprehensive research on the topic"""
    # Agent logic here - works with any LLM and tools
    return {
        "raw_research": research_content,
        "sources": sources,
        "messages": state["messages"] + [response]
    }
```
By accepting `llm` and `tools` as parameters, the same function can work with different models and capabilities. This is the dependency injection pattern in action.
Wrapper Pattern for Project-Specific Integration
```python
def research_agent_wrapper(state: ResearchState) -> ResearchState:
    """Wrapper that injects project-specific dependencies"""
    return research_agent(state, research_llm, tools)
```
The wrapper handles the impedance mismatch between the generic agent function and your specific state structure and dependencies.
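For instance, a different project could wrap the same generic function with its own state class, model, and toolset (everything named here is hypothetical):

```python
def market_research_wrapper(state: "MarketState") -> "MarketState":
    # Hypothetical second project: same generic agent, different LLM and tools
    return research_agent(state, project_b_llm, market_tools)
```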
The modular approach here isn’t just about clean code; it also delivers tangible benefits that make your life as a developer much easier.
Rapid Prototyping becomes incredibly fast when you have reusable agent functions. Want to test a new research workflow? Just import the functions and create a new graph. You can have a working prototype in minutes instead of hours:
```python
from agent_functions import research_agent, formatter_agent
from tools import get_tools

# New workflow in minutes
workflow = StateGraph(MyState)
workflow.add_node("research", lambda state: research_agent(state, my_llm, my_tools))
workflow.add_node("formatter", lambda state: formatter_agent(state, my_llm, my_tools))
```
Easy Customization means you can adapt the same agent logic to different projects without rewriting everything. Maybe one project needs OpenAI for research and Groq for formatting, while another needs the opposite. With our modular approach, it’s just a matter of passing different LLMs:
```python
# Project A: Use OpenAI for research, Groq for formatting
research_agent(state, openai_llm, tools)

# Project B: Use Groq for research, Claude for formatting
research_agent(state, groq_llm, tools)
```
Code Sharing across teams becomes seamless. Instead of duplicating the code between projects, you can now reuse the same agent module across projects. When you improve an agent function, all projects benefit automatically.
Running the Research Assistant
Installation
```bash
uv sync
```
Environment Setup
```
# .env file

# LLM API Keys
OPENAI_API_KEY=your_openai_key
GROQ_API_KEY=your_groq_key
SERPER_API_KEY=your_serper_key

# LangSmith Observability
LANGCHAIN_API_KEY=your_langsmith_api_key
LANGCHAIN_TRACING_V2=true
LANGCHAIN_PROJECT=my-research-system
LANGCHAIN_ENDPOINT=https://api.smith.langchain.com
```
LangSmith Setup
- Sign up at smith.langchain.com
- Generate an API key from the settings page
- Set environment variables as shown above
- Start tracing – your LangGraph workflows will automatically be tracked!
Basic Usage
To run this from the command line, ensure you have the `uv` command-line tool set up. I’m starting to find that `uv` is more productive, particularly in how it automatically sets up virtual environments.
```bash
uv run research.py
```
When you run this, it will prompt you for a topic to research; enter one and off you go (there’s a default topic if you just hit Enter). The final response is printed to the console and also saved to a Markdown file.
I hope this post helps you get started with LangGraph. Once you get the hang of the basics, it’s quick. I’d also suggest using vibe-coding tools to help you build simple examples (but only after you’ve reviewed the basics of how this framework works).