Backend Setup
Complete guide to setting up and running the SynapseAI backend
The SynapseAI backend is a FastAPI application that uses LangGraph for conversational AI orchestration and integrates with multiple external services through MCP (Model Context Protocol) clients. The example configuration uses FlavorFlux as the demo brand.
Prerequisites
- Python 3.11+ (required)
- Docker and Docker Compose (for containerized development)
- uv package manager (recommended) or pip
- Redis (for session storage - can run via Docker)
Installation
Using uv (Recommended)
```bash
cd backend

# Install uv if you haven't already
curl -LsSf https://astral.sh/uv/install.sh | sh

# Install SynapseAI dependencies
uv pip install -e .
```
Using pip
```bash
cd backend
pip install -e .
```
Configuration
Environment Variables
Create a .env file in the backend directory based on env.example:
```bash
cp env.example .env
```
Required Configuration
LLM Configuration (LiteLLM):
```bash
# Choose your LLM provider and model
LITELLM_MODEL=cerebras/llama-3.3-70b
# or: LITELLM_MODEL=openai/gpt-4o-mini

# API keys (provide at least one)
OPENAI_API_KEY=sk-...
GROQ_API_KEY=gsk_...
TOGETHER_API_KEY=...
CEREBRAS_API_KEY=...
```
Commercetools Configuration:
```bash
CT_CLIENT_ID=your-client-id
CT_CLIENT_SECRET=your-client-secret
CT_PROJECT_KEY=your-project-key
CT_AUTH_URL=https://auth.us-central1.gcp.commercetools.com
CT_API_URL=https://api.us-central1.gcp.commercetools.com
CT_SCOPES=manage_products manage_orders view_products ...
CT_MCP_SERVER_URL=http://localhost:3000

# Auth-specific credentials
CT_AUTH_CLIENT=your-auth-client
CT_AUTH_SECRET=your-auth-secret
CT_AUTH_SCOPE=manage_customers:your-project-key
```
Voucherify Configuration:
```bash
VOUCHERIFY_MCP_BASE_URL=http://localhost:3002
VOUCHERIFY_APP_ID=your-app-id
VOUCHERIFY_APP_TOKEN=your-app-token
VOUCHERIFY_API_BASE_URL=https://api.voucherify.io
```
OpenAI Realtime (Voice Chat):
```bash
OPENAI_REALTIME_MODEL=gpt-4o-realtime-preview-2024-10-01
OPENAI_VOICE=alloy  # Options: alloy, echo, shimmer
```
CORS Configuration:
```bash
CORS_ORIGINS=http://localhost:3000,http://localhost:5173
```
Running the Backend
Option 1: Using Docker Compose (Recommended)
This starts the entire backend stack including MCP servers and Redis:
```bash
cd backend
docker-compose -f docker-compose.dev.yml up --build -d
```
Services started:
- Backend API: http://localhost:8001
- Commercetools MCP Server: http://localhost:3000
- Voucherify MCP Server: http://localhost:3002
- Redis: localhost:6379
View logs:
```bash
docker-compose -f docker-compose.dev.yml logs -f
```
Stop services:
```bash
docker-compose -f docker-compose.dev.yml down
```
Option 2: Running Locally
1. Start Redis (if not using Docker):
```bash
redis-server
```
2. Start MCP Servers:
Commercetools MCP Server:
```bash
# Install and run
npx -y @commercetools/mcp-essentials
```
Voucherify MCP Server:
```bash
# Install and run
uvx voucherify-core-mcp
```
3. Start the FastAPI Backend:
```bash
cd backend
uvicorn app.main:app --reload --port 8001
```
The API will be available at http://localhost:8001.
Development
Project Structure
```
backend/
├── app/
│   ├── __init__.py
│   ├── main.py                  # FastAPI application entry point
│   ├── config.py                # Configuration management
│   ├── models.py                # Pydantic models
│   ├── agent/                   # LangGraph AI agent
│   │   ├── intelligent_graph.py # Main agent graph
│   │   └── state.py             # Agent state management
│   ├── mcp/                     # MCP client implementations
│   │   └── client.py
│   ├── routers/                 # API route handlers
│   │   └── auth.py
│   ├── services/                # Business logic services
│   │   ├── auth_service.py
│   │   ├── ct_auth.py
│   │   ├── customers.py
│   │   ├── prompts.py
│   │   ├── security.py
│   │   ├── session.py
│   │   └── token_manager.py
│   └── utils/                   # Utility functions
│       ├── fixtures.py
│       └── json_sanitize.py
├── scripts/                     # Utility scripts
│   └── sync_voucherify_promotions.py
├── docker-compose.yml
├── Dockerfile
├── pyproject.toml               # Python dependencies
└── uv.lock
```
Key Components
LangGraph Agent (app/agent/intelligent_graph.py)
The conversational AI agent with multiple specialized nodes:
- intent_detector: Classifies user intent and routes to handlers
- conversational_handler: Handles greetings and general chat
- product_processor: Lists and searches products
- cart_get_processor: Views cart contents
- cart_update_processor: Parses cart additions
- cart_add_processor: Adds items to cart
- cart_remove_processor: Removes items from cart
- product_resolution: Resolves product matches
- product_disambiguation: Handles multiple product matches
- await_disambiguation_response: Processes user selections
- order_processor: Places orders
- order_history_processor: Shows order history
- reorder_processor: Reorders from history
- promotion_processor: Lists campaigns and promotions
- response_generator: Generates formatted responses
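The routing pattern behind these nodes can be sketched in plain Python. This is a simplified illustration only: the real agent wires the nodes together with LangGraph conditional edges, and the keyword rules and canned replies below are invented for the example.

```python
# Simplified illustration of the intent-routing pattern.  The real
# agent uses LangGraph conditional edges; the keyword rules and
# canned handler replies below are invented.

def intent_detector(message: str) -> str:
    """Classify a user message into a node name from the list above."""
    text = message.lower()
    words = set(text.split())
    if words & {"hi", "hello", "hey", "thanks"}:
        return "conversational_handler"
    if "order history" in text or "past orders" in text:
        return "order_history_processor"
    if "cart" in text and ("add" in text or "put" in text):
        return "cart_update_processor"
    if "cart" in text:
        return "cart_get_processor"
    if words & {"promo", "promotion", "promotions", "campaign", "campaigns", "discount"}:
        return "promotion_processor"
    return "product_processor"  # default: treat as a product query

HANDLERS = {
    "conversational_handler": lambda m: "Hello! How can I help?",
    "order_history_processor": lambda m: "Fetching your orders.",
    "cart_update_processor": lambda m: "Parsing items to add...",
    "cart_get_processor": lambda m: "Here is your cart.",
    "promotion_processor": lambda m: "Listing active campaigns.",
    "product_processor": lambda m: f"Searching products for: {m}",
}

def run_agent(message: str) -> str:
    """Route the message to a handler node and return its response."""
    return HANDLERS[intent_detector(message)](message)
```

In the actual graph, each handler node calls MCP tools and hands its result to response_generator rather than returning a string directly.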
MCP Clients (app/mcp/client.py)
Provides standardized interfaces to external services:
- Commercetools: Products, carts, orders, customers
- Voucherify: Promotions, campaigns, vouchers
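A client wrapper along these lines might look like the following. This is a hypothetical sketch, not the actual app/mcp/client.py API: the transport is injected so the class can be exercised without a live server, and the endpoint path and tool name are invented.

```python
# Hypothetical sketch of a thin MCP client wrapper.  The transport
# is injected so it can run without a live MCP server; the endpoint
# path and tool name are invented for illustration.

class MCPClient:
    def __init__(self, base_url: str, transport):
        self.base_url = base_url
        self.transport = transport  # callable(url, payload) -> dict

    def call_tool(self, tool: str, arguments: dict) -> dict:
        """Send a tool invocation to the MCP server and return its result."""
        payload = {"tool": tool, "arguments": arguments}
        return self.transport(f"{self.base_url}/tools/call", payload)

# Fake transport standing in for an HTTP POST:
def fake_transport(url, payload):
    return {"url": url, "echo": payload["tool"]}

ct = MCPClient("http://localhost:3000", fake_transport)
result = ct.call_tool("list_products", {"limit": 5})
```

Injecting the transport keeps the wrapper testable; in production it would be an httpx call against CT_MCP_SERVER_URL or VOUCHERIFY_MCP_BASE_URL.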
API Documentation
Once the backend is running, visit:
- Swagger UI: http://localhost:8001/docs
- ReDoc: http://localhost:8001/redoc
Testing
Run the test suite:
```bash
cd backend
pytest
```
Run with coverage:
```bash
pytest --cov=app --cov-report=html
```
API Endpoints
Chat Interface
POST /agent/chat
Main conversational interface for AI-powered interactions.
Request:
```json
{
  "message": "Show me berry flavored products",
  "session_id": "user-123",
  "customer_id": "customer-abc"
}
```
Response:
```json
{
  "response": "Here are the berry-flavored products...",
  "session_id": "user-123",
  "data": { ... }
}
```
Authentication
POST /auth/login
User login with email and password.
POST /auth/signup
Create a new user account.
POST /auth/refresh
Refresh authentication token.
POST /auth/logout
Logout and invalidate session.
Health Check
GET /health
Returns the health status of the service.
Common Issues
Issue: "Missing environment variables"
Solution: Ensure all required variables in .env are set. Check env.example for reference.
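A small startup check can surface missing variables early. The snippet below is illustrative; the variable list mirrors a subset of the required settings from the configuration section above.

```python
import os

# Subset of the variables the backend expects (see Configuration).
REQUIRED_VARS = ["LITELLM_MODEL", "CT_CLIENT_ID",
                 "CT_CLIENT_SECRET", "CT_PROJECT_KEY"]

def check_env(required: list[str]) -> list[str]:
    """Return the names of required variables that are unset or empty."""
    return [name for name in required if not os.environ.get(name)]

missing = check_env(REQUIRED_VARS)
if missing:
    print(f"Missing environment variables: {', '.join(missing)}")
```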
Issue: "Cannot connect to Redis"
Solution:
- Check Redis is running: `redis-cli ping`
- Verify Redis host/port in configuration
- Use Docker: `docker-compose -f docker-compose.dev.yml up redis -d`
Issue: "MCP server connection failed"
Solution:
- Ensure MCP servers are running on correct ports
- Check firewall settings
- Verify the CT_MCP_SERVER_URL and VOUCHERIFY_MCP_BASE_URL settings
Issue: "LLM API rate limit exceeded"
Solution:
- Check your API key limits
- Consider switching to a different model/provider
- Update `LITELLM_MODEL` in `.env`
Scripts and Utilities
Sync Voucherify Promotions
Synchronizes Voucherify campaigns to Commercetools:
```bash
cd backend
python scripts/sync_voucherify_promotions.py
```
See Scripts Documentation for more details.
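Conceptually, a sync of this kind is a fetch-and-map loop like the skeleton below. It is purely illustrative: both data sources are stubbed, and the field mapping is invented; the real script's behavior may differ.

```python
# Purely illustrative skeleton of a Voucherify -> Commercetools sync.
# Fetch/push are stubs and the field mapping is invented; see the
# real scripts/sync_voucherify_promotions.py for actual behavior.

def fetch_voucherify_campaigns():
    """Stub: would query the Voucherify API via its MCP server."""
    return [{"name": "Berry Week", "discount_percent": 10}]

def to_ct_discount(campaign: dict) -> dict:
    """Map a campaign to a Commercetools-style relative discount."""
    return {
        "name": campaign["name"],
        # Commercetools expresses relative discounts in permyriad
        # (1/10000), so 10% -> 1000.
        "value": {"type": "relative",
                  "permyriad": campaign["discount_percent"] * 100},
    }

def sync():
    pushed = []
    for campaign in fetch_voucherify_campaigns():
        pushed.append(to_ct_discount(campaign))  # stub: would POST to CT
    return pushed

synced = sync()
```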
Performance Optimization
Redis Configuration
For production, configure Redis with:
- Persistence (RDB or AOF)
- Memory limits
- Eviction policies
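A starting point might look like this redis.conf fragment (illustrative values; tune the memory cap and policies for your workload):

```
# Persistence: append-only file with per-second fsync
appendonly yes
appendfsync everysec

# Cap memory and evict least-recently-used keys when full
maxmemory 512mb
maxmemory-policy allkeys-lru
```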
LiteLLM Caching
Enable LiteLLM caching for improved performance:
```python
import litellm
from litellm.caching import Cache

# In-memory cache by default; Redis is also supported as a backend
litellm.cache = Cache()
```
Connection Pooling
The application uses connection pooling for:
- HTTP clients (httpx)
- Redis connections
- Database connections
Monitoring and Observability
Logging
Logs are written to:
- Console: Standard output
- CloudWatch: (when deployed to AWS)
Configure log level:
```bash
LOG_LEVEL=INFO  # DEBUG, INFO, WARNING, ERROR, CRITICAL
```
Health Checks
Monitor service health at GET /health:
```json
{
  "status": "healthy",
  "timestamp": "2024-01-01T00:00:00Z",
  "services": {
    "redis": "connected",
    "mcp_ct": "connected",
    "mcp_voucherify": "connected"
  }
}
```
Next Steps
- Frontend Setup - Set up the React frontend
- Architecture - Understand the system architecture
- Deployment - Deploy to production
- Scripts - Utility scripts and tools