A multi-agent research system using LangGraph for automated research and report generation.

Build, test, and deploy a multi-agent AI system using LangGraph, Docker, AWS Lambda, and CircleCI. The system implements a research-driven workflow in which specialized agents, such as search, fact-checking, and summarization agents, work together. The application is packaged into a Docker container, deployed to AWS Lambda, and the entire pipeline runs on CircleCI.
This project was developed as part of an accompanying blog post.

Features:
- Multi-agent architecture using LangGraph
- Automated web search using Serper API
- Fact-checking and verification
- Report generation with structured summaries
- AWS Lambda deployment support
- Configurable confidence scores and retry mechanisms
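The retry-with-confidence workflow described above can be sketched as a plain-Python loop. This is an illustration only, not the actual LangGraph implementation: the agent functions, confidence scoring, and default values below are hypothetical stand-ins.

```python
# Illustrative sketch of the search -> fact-check -> summarize loop.
# The real project wires these agents into a LangGraph graph; plain
# functions stand in here so the control flow is easy to follow.

CONFIDENCE_THRESHOLD = 0.8  # assumed default
MAX_RETRIES = 3
ADD_MAX_RESULTS = 2

def search_agent(query, max_results):
    # Placeholder for the Serper-backed search agent.
    return [f"result {i} for {query!r}" for i in range(max_results)]

def fact_check_agent(results):
    # Placeholder scoring: more evidence -> higher confidence.
    return min(1.0, 0.3 + 0.2 * len(results))

def summarization_agent(results):
    # Placeholder for the report-generating summarization agent.
    return f"Report based on {len(results)} sources."

def run_research(query):
    max_results = ADD_MAX_RESULTS
    for _attempt in range(MAX_RETRIES):
        results = search_agent(query, max_results)
        if fact_check_agent(results) >= CONFIDENCE_THRESHOLD:
            return summarization_agent(results)
        # Not confident enough: widen the search and retry.
        max_results += ADD_MAX_RESULTS
    return summarization_agent(results)  # best effort after retries

print(run_research("benefits of AWS Cloud Services"))
```

The key design point is that a low fact-check confidence does not fail the run; it triggers another search pass with `ADD_MAX_RESULTS` more results.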
Prerequisites:

- Python 3.12
- AWS CLI (for Lambda deployment)
- Serper API key
- OpenAI API key
- AWS credentials (for Lambda deployment)
Installation:

1. Clone the repository:

   ```bash
   git clone https://github.com/benitomartin/multiagent-langgraph-circleci.git
   cd multiagent-langgraph-circleci
   ```

2. Create a virtual environment:

   ```bash
   uv venv
   ```

3. Activate the virtual environment:

   On Windows:

   ```bash
   .venv\Scripts\activate
   ```

   On Unix or macOS:

   ```bash
   source .venv/bin/activate
   ```

4. Install the required packages:

   ```bash
   uv sync --all-extras
   ```
Create a `.env` file in the root directory:

```bash
# API Keys
SERPER_API_KEY=your_serper_key_here
OPENAI_API_KEY=your_openai_key_here

# AWS Configuration
AWS_REGION=your_aws_region
AWS_ACCESS_KEY_ID=your_aws_access_key
AWS_SECRET_ACCESS_KEY=your_aws_secret_key
AWS_ACCOUNT_ID=your_aws_account_id

# Repository and Image Configuration
REPOSITORY_NAME=langgraph-ecr-docker-repo
IMAGE_NAME=langgraph-lambda-image

# Lambda Configuration
LAMBDA_FUNCTION_NAME=langgraph-lambda-function
ROLE_NAME=lambda-bedrock-role
ROLE_POLICY_NAME=LambdaBedrockPolicy
```
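A sketch of how the required variables might be read and validated at startup. This helper is illustrative only; the project itself may load the `.env` file differently (for example with python-dotenv).

```python
import os

# The two keys the application cannot run without.
REQUIRED_VARS = ["SERPER_API_KEY", "OPENAI_API_KEY"]

def check_env(env=os.environ):
    """Return the required keys, failing early if any is missing."""
    missing = [name for name in REQUIRED_VARS if not env.get(name)]
    if missing:
        raise RuntimeError(
            f"Missing environment variables: {', '.join(missing)}"
        )
    return {name: env[name] for name in REQUIRED_VARS}
```

Failing fast on missing keys gives a clear error at startup instead of an opaque API failure mid-run.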
To obtain the required API keys:
- Serper API Key: Sign up at Serper.dev
- OpenAI API Key: Sign up at OpenAI Platform
- AWS Credentials: Create through AWS IAM Console
The following parameters can be adjusted in `config/settings.py`:

- `CONFIDENCE_THRESHOLD`: Threshold for confidence in fact-checking
- `MAX_RETRIES`: Maximum number of retries for the search agent
- `ADD_MAX_RESULTS`: Number of search results to add in each retry
- `FACT_CHECK_MODEL`: Model used for fact-checking (default: "gpt-4-mini")
- `SUMMARIZATION_MODEL`: Model used for summarization (default: "anthropic.claude-3-haiku")
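Put together, `config/settings.py` might look roughly like this. The exact layout is an assumption; the model strings are the documented defaults, and the numeric values mirror the CLI example below.

```python
# config/settings.py -- illustrative layout, not the actual file.

CONFIDENCE_THRESHOLD = 0.85   # minimum fact-check confidence to accept results
MAX_RETRIES = 3               # search-agent retry budget
ADD_MAX_RESULTS = 2           # extra search results requested per retry

FACT_CHECK_MODEL = "gpt-4-mini"                   # fact-checking model
SUMMARIZATION_MODEL = "anthropic.claude-3-haiku"  # summarization model
```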
To run the research graph locally:

```bash
uv run src/graph/research_graph.py \
  --query "What are the benefits of using AWS Cloud Services?" \
  --confidence-threshold 0.85 \
  --max-retries 3 \
  --add-max-results 2
```
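The flags above suggest a command-line interface roughly like the following argparse setup. This is a sketch; the actual script's defaults and help text may differ.

```python
import argparse

def build_parser():
    # Mirrors the flags shown in the local-run example above.
    parser = argparse.ArgumentParser(description="Run the research graph")
    parser.add_argument("--query", required=True,
                        help="Research question to investigate")
    parser.add_argument("--confidence-threshold", type=float, default=0.85,
                        help="Minimum fact-check confidence")
    parser.add_argument("--max-retries", type=int, default=3,
                        help="Search-agent retry budget")
    parser.add_argument("--add-max-results", type=int, default=2,
                        help="Extra search results per retry")
    return parser

if __name__ == "__main__":
    args = build_parser().parse_args()
    print(args.query, args.confidence_threshold)
```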
Build and deploy the Docker image with the Lambda function:

```bash
chmod +x build_deploy.sh
./build_deploy.sh
```
To invoke the deployed Lambda function, add your region and run the following command:

```bash
aws lambda invoke \
  --function-name langgraph-lambda-function \
  --payload '{"query": "What are the benefits of using CircleCI?"}' \
  --region <your_region> \
  --cli-binary-format raw-in-base64-out \
  response.json && \
cat response.json | jq
```
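On the Lambda side, the payload above implies a handler that reads `query` from the invocation event, along the lines of the sketch below. This is illustrative: the real handler runs the research graph rather than returning a placeholder.

```python
import json

def handler(event, context):
    """Minimal shape of a Lambda handler for the payload shown above."""
    query = event.get("query")
    if not query:
        return {
            "statusCode": 400,
            "body": json.dumps({"error": "missing 'query' in event"}),
        }
    # The deployed function would invoke the research graph here.
    report = f"Report for: {query}"  # placeholder result
    return {"statusCode": 200, "body": json.dumps({"report": report})}
```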
This project is licensed under the MIT License - see the LICENSE file for details.