langchain-multi-agent-framework

Production-ready multi-agent orchestration using LangChain, LangGraph, and CrewAI patterns

Python 3.11+ · AI/ML · v0.1.0

Executive Summary

A multi-agent orchestration framework built on LangChain, LangGraph, and CrewAI patterns. It provides a supervisor-driven StateGraph that routes tasks to specialized agents -- Planner, Researcher, Coder, and Reviewer -- which collaborate through shared memory. The framework supports multiple LLM providers (OpenAI, Anthropic, Azure OpenAI), multiple vector store backends (ChromaDB, Pinecone, Weaviate), and includes tools for web search (Tavily, SerpAPI, DuckDuckGo), sandboxed code execution, and RAG retrieval. Designed for research workflows, development teams, and knowledge-base-backed customer support.

Overview

| Language | Category | Core Dependencies | Source Modules | Specialized Agents | License |
|----------|----------|-------------------|----------------|--------------------|---------|
| Python   | AI/ML    | 13                | 10             | 4                  | MIT     |

Architecture Diagram

```
User Request
      |
      v
Supervisor Node -- routes tasks to agents
      |
      v
Planner Agent      -- task decomposition
Researcher Agent   -- web search + RAG
Coder Agent        -- code generation
Reviewer Agent     -- quality assurance
      |
      |  LangGraph StateGraph Orchestrator
      |  (cyclic routing back to Supervisor)
      v
Agent Tools + Shared State
  Web Search      -- Tavily / SerpAPI / DDG
  Code Executor   -- sandboxed Python
  Vector Store    -- ChromaDB / Pinecone
  Shared Memory   -- thread-safe state
      |
      v
Final Output
```

Component Breakdown

| Component | File | Purpose | Key Dependencies |
|-----------|------|---------|------------------|
| Orchestrator | src/orchestrator.py | LangGraph StateGraph with supervisor routing | langgraph, langchain |
| Configuration | src/config.py | Dataclass-based config for LLM, vector store, agents | pydantic |
| Base Agent | src/agents/base_agent.py | Abstract base with LLM factory (OpenAI/Anthropic/Azure) | langchain-openai, langchain-anthropic |
| Planner | src/agents/planner.py | Task decomposition into sub-tasks | Base agent |
| Researcher | src/agents/researcher.py | Web search and RAG-based research | Web search tool, vector store |
| Coder | src/agents/coder.py | Code generation with sandboxed execution | Code executor tool |
| Reviewer | src/agents/reviewer.py | Code review and quality assurance | Base agent |
| Web Search | src/tools/web_search.py | Multi-provider web search (Tavily, SerpAPI, DDG) | tavily-python, duckduckgo-search |
| Code Executor | src/tools/code_executor.py | Sandboxed Python code execution | subprocess (stdlib) |
| Vector Store | src/tools/vector_store.py | RAG retrieval from ChromaDB/Pinecone/Weaviate | langchain-chroma, chromadb |
| Shared Memory | src/memory/conversation.py | Thread-safe shared memory for inter-agent communication | threading (stdlib) |
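The thread-safe shared memory component can be pictured as a small locked key-value store. The sketch below is illustrative only — the class and method names are hypothetical stand-ins for whatever `src/memory/conversation.py` actually exposes:

```python
import threading

class SharedMemory:
    """Minimal sketch of a thread-safe store for inter-agent state.

    Hypothetical API: the real src/memory/conversation.py may differ;
    this only illustrates the locking pattern named in the table above.
    """

    def __init__(self):
        self._lock = threading.Lock()
        self._store = {}

    def write(self, agent: str, key: str, value):
        # All mutation happens under the lock, so concurrent agents
        # never interleave partial updates.
        with self._lock:
            self._store.setdefault(agent, {})[key] = value

    def read(self, agent: str, key: str, default=None):
        with self._lock:
            return self._store.get(agent, {}).get(key, default)

memory = SharedMemory()
memory.write("researcher", "findings", ["LangGraph supports cyclic graphs"])
print(memory.read("researcher", "findings"))
```

A single coarse lock is enough here because agent handoffs are serialized by the supervisor; only tool callbacks run concurrently.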

Data Flow / Request Flow

  1. User Request -- User submits a task description to orchestrator.run().
  2. Supervisor Routing -- The supervisor node analyzes the request and routes to the appropriate agent (plan, research, code, or review), or emits done to terminate.
  3. Agent Execution -- Each agent processes its assigned sub-task using its tools and LLM. Results are written to shared memory.
  4. Cyclic Routing -- After each agent completes, control returns to the supervisor, which decides the next step or terminates.
  5. Tool Usage -- Researcher uses web search and vector store retrieval. Coder uses sandboxed code execution. All agents read/write shared memory.
  6. Recursion Limit -- The graph enforces a configurable recursion limit (default 50) to prevent infinite loops.
  7. Final Output -- When the supervisor routes to "done", the final state containing all agent results is returned.
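The cycle above can be sketched in plain Python. This is a stand-in for the real LangGraph StateGraph (which wires the same loop with conditional edges back to the supervisor); the routing heuristic and state shape are assumptions for illustration:

```python
RECURSION_LIMIT = 50  # the framework's default recursion limit

def supervisor(state: dict) -> str:
    """Hypothetical routing policy: plan, then research, code, review, done.
    The real supervisor uses an LLM to pick the next step."""
    for step in ["plan", "research", "code", "review", "done"]:
        if step == "done" or step not in state["completed"]:
            return step
    return "done"

def run(task: str) -> dict:
    state = {"task": task, "completed": [], "results": {}}
    for _ in range(RECURSION_LIMIT):
        route = supervisor(state)
        if route == "done":
            return state  # final state with all agent results
        # Stand-in for agent execution: write a result to shared state.
        state["results"][route] = f"{route} output for {task!r}"
        state["completed"].append(route)
    raise RuntimeError("recursion limit reached: possible agent loop")

final = run("summarize LangGraph routing")
print(sorted(final["results"]))
```

Note that the recursion limit bounds supervisor round-trips, not wall-clock time; the per-agent timeout (below) covers the latter.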

Security Controls

| Control | Implementation |
|---------|----------------|
| API Key Management | Environment variables via python-dotenv; supports OpenAI, Anthropic, and Azure keys |
| Sandboxed Execution | Code executor runs Python in isolated subprocess with resource limits |
| Recursion Limit | Configurable graph recursion limit prevents runaway agent loops |
| Agent Timeout | Per-agent execution timeout (default 300s) |
| Thread Safety | Shared memory uses thread-safe locking for concurrent agent access |
| Input Validation | Pydantic v2 models for all configuration structures |
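The subprocess-plus-timeout part of the sandboxed execution control can be sketched as follows. This shows only the isolation and timeout mechanics; the framework's actual executor in `src/tools/code_executor.py` also applies resource limits, and the function name here is an assumption:

```python
import subprocess
import sys

def execute_sandboxed(code: str, timeout: float = 10.0) -> dict:
    """Run untrusted Python in a separate process with a hard timeout.

    Illustrative sketch, not the framework's confirmed API.
    -I puts the interpreter in isolated mode (no user site-packages,
    no PYTHONPATH), reducing the code's reach into the host environment.
    """
    try:
        proc = subprocess.run(
            [sys.executable, "-I", "-c", code],
            capture_output=True, text=True, timeout=timeout,
        )
        return {"stdout": proc.stdout, "stderr": proc.stderr,
                "returncode": proc.returncode}
    except subprocess.TimeoutExpired:
        # The child is killed by subprocess.run on timeout.
        return {"stdout": "", "stderr": "timeout", "returncode": -1}

result = execute_sandboxed("print(2 + 2)")
print(result["stdout"].strip())  # → 4
```

A subprocess boundary alone is not a security boundary against a determined adversary; for hostile inputs, pair it with OS-level controls (containers, seccomp, or a jailed user).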

Industry Adaptation

Healthcare

Clinical research teams with medical literature RAG. HIPAA-compliant data retrieval with filtered vector stores. Medical coding assistance agents.

Finance

Financial analysis research with regulatory document RAG. Automated due diligence report generation. Compliance review agents for SEC filings.

Government

Policy analysis and legislative research. Classified-environment compatible with air-gapped LLM providers. Multi-agency knowledge base aggregation.

Retail

Product catalog research and competitive analysis. Customer support with product knowledge RAG. Automated content generation for marketing.

SaaS

Developer documentation assistance. Automated code review and quality gates. Technical support with codebase-aware RAG retrieval.

Production Readiness Checklist

Configuration / Environment Variables

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| OPENAI_API_KEY | Yes* | -- | OpenAI API key (*or use another provider) |
| ANTHROPIC_API_KEY | No | -- | Anthropic API key (alternative provider) |
| AZURE_OPENAI_API_KEY | No | -- | Azure OpenAI API key (alternative provider) |
| LLM_PROVIDER | No | openai | LLM provider: openai, anthropic, azure_openai |
| LLM_MODEL | No | gpt-4o | Model identifier |
| LLM_TEMPERATURE | No | 0.0 | Sampling temperature |
| LLM_MAX_TOKENS | No | 4096 | Maximum output tokens |
| TAVILY_API_KEY | No | -- | Tavily search API key (preferred search) |
| SERPAPI_API_KEY | No | -- | SerpAPI key (fallback search) |
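As a rough sketch of how the LLM_* variables above map onto configuration, here a stdlib dataclass stands in for the framework's Pydantic models (field and class names are assumptions, but the defaults match the table):

```python
import os
from dataclasses import dataclass

@dataclass
class LLMConfig:
    """Illustrative stand-in for the framework's Pydantic config model."""
    provider: str = "openai"
    model: str = "gpt-4o"
    temperature: float = 0.0
    max_tokens: int = 4096

    @classmethod
    def from_env(cls) -> "LLMConfig":
        # Each field falls back to the table's default when unset.
        return cls(
            provider=os.getenv("LLM_PROVIDER", "openai"),
            model=os.getenv("LLM_MODEL", "gpt-4o"),
            temperature=float(os.getenv("LLM_TEMPERATURE", "0.0")),
            max_tokens=int(os.getenv("LLM_MAX_TOKENS", "4096")),
        )

cfg = LLMConfig.from_env()
print(cfg.provider, cfg.model)
```

In the real framework, python-dotenv loads `.env` into the process environment first, and Pydantic validates types and ranges on construction.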

Deployment

Local Development

```bash
git clone https://github.com/your-org/langchain-multi-agent-framework.git
cd langchain-multi-agent-framework
python -m venv .venv
source .venv/bin/activate
pip install -e ".[search,dev]"
cp .env.example .env
python -m examples.research_team
```

Production

```bash
# Install with preferred vector store:
pip install -e ".[search]"            # ChromaDB (default)
pip install -e ".[search,pinecone]"   # Pinecone
pip install -e ".[search,weaviate]"   # Weaviate

# Run as async service:
# await orchestrator.arun("task")
```
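A minimal sketch of embedding `arun()` in an async service, assuming the orchestrator exposes an awaitable entry point as shown above. The `Orchestrator` stub here is hypothetical so the snippet runs standalone:

```python
import asyncio

class Orchestrator:
    """Hypothetical stand-in for the framework's orchestrator; the real
    arun() awaits agent and LLM calls through the StateGraph."""

    async def arun(self, task: str) -> dict:
        await asyncio.sleep(0)  # placeholder for real async work
        return {"task": task, "status": "done"}

async def main():
    # In a web service this would live inside a request handler,
    # letting the event loop serve other requests while agents run.
    result = await Orchestrator().arun("research LangGraph routing")
    print(result["status"])

asyncio.run(main())
```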

Links

- Repository: github.com/your-org/langchain-multi-agent-framework
- README: README.md
- Changelog: CHANGELOG.md
- License: MIT License